7 Comments

Don’t build the product. Run the workflow with AI instead

We all love to build… but don’t. Before you build the product, test the workflow.

Use an AI agent like Manus to simulate the backend. Deliver value manually. And see if the outcome matters. This proves that the job is worth solving before you solve it.

Here’s how to do it today.

What Manus actually does

Manus is a general-purpose AI agent that runs in a sandboxed VM, with internet access, AI models, and tool integrations.

It can:

  • Spin up a sandboxed Linux environment
  • Use a browser like a human
  • Write and run Python code
  • Read and generate files
  • Track tasks across multiple steps
  • Process and generate multi-modal data (text, images, code)
  • Learn from interactions and self-correct

It doesn’t just generate content — it executes workflows. In other words: It does the actual job your future product is meant to do.

And that changes how we test ideas.

Example: “Build a resume-screening SaaS”

Let’s say your product idea is this: “A tool that lets hiring managers upload resumes and get a ranked list of best-fit candidates.”

You don’t need a backend. You don’t need to code parsing logic. You don’t need to design anything.

You just need to prove: Will someone hand me a folder of resumes — and care enough about the ranked output to use it? And will they ask to do it again?

That’s your test. Here’s how to run it.

Step-by-Step: How to test the product without building it

1. Set up a simple input form

Use Jotform, Notion, Tally, or Typeform.

Collect:

  • The job title / role they’re hiring for
  • A ZIP of PDF resumes

No branding. No UI. Just the input.

2. Get one real user

Reach out directly — Slack, Twitter, LinkedIn.

Say: “I’m testing a lightweight resume-ranking tool for tech hiring. If you’ve got 15 seconds to upload your resumes, I’ll send back a ranked spreadsheet. No strings.”

You’re offering value. That’s it.

3. Open Manus and give it a prompt

Use a prompt like this:

I have a ZIP file of 20 PDF resumes for a machine learning engineering role.

Please:
- Unzip the files
- Extract relevant info: name, education, skills, years of experience, GitHub links
- Score candidates based on role fit
- Output an Excel spreadsheet ranked by relevance, with reasoning in notes

Manus handles the rest:

  • Writes Python to parse the resumes
  • Generates structured data
  • Ranks candidates
  • Outputs a usable spreadsheet

You don’t touch code. You just orchestrate.
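Under the hood, that workflow boils down to a parse → score → rank loop. Here’s a minimal Python sketch of the logic — the sample candidates, skill weights, and scoring function are illustrative assumptions of what an agent might generate, not Manus’s actual code (and a real run would start by parsing the PDFs):

```python
import csv

# Hypothetical role profile: weighted skills for an ML engineering role.
ROLE_SKILLS = {"python": 3, "pytorch": 3, "mlops": 2, "sql": 1}

# Stand-in for data extracted from the parsed resumes.
candidates = [
    {"name": "A. Rivera", "skills": ["python", "pytorch", "sql"], "years": 5},
    {"name": "B. Chen",   "skills": ["python", "mlops"],          "years": 2},
    {"name": "C. Okafor", "skills": ["sql"],                      "years": 8},
]

def score(candidate):
    """Weighted skill match plus a small experience bonus (capped at 5 years)."""
    skill_score = sum(ROLE_SKILLS.get(s, 0) for s in candidate["skills"])
    return skill_score + min(candidate["years"], 5) * 0.5

# Rank best-fit first and write a spreadsheet-friendly CSV with reasoning notes.
ranked = sorted(candidates, key=score, reverse=True)
with open("ranked_candidates.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["rank", "name", "score", "notes"])
    for i, c in enumerate(ranked, 1):
        matched = [s for s in c["skills"] if s in ROLE_SKILLS]
        writer.writerow([i, c["name"], score(c), f"matched: {', '.join(matched)}"])
```

The point isn’t this exact scoring formula — it’s that the whole “product” is a few dozen lines of glue the agent can write for you.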

4. Deliver the output

Send back the spreadsheet. And let them know that if they found it valuable, they can run it again any time.

That’s it. No signup. No product tour. No activation funnel. Just a result.

5. Observe the reaction

What you’re looking for:

  • Do they say thank you?
  • Do they ask to use it again?
  • Do they send it to someone else?

You just ran the entire product manually.

And you now have actual signal — not assumptions.

Does this work for every kind of product?

Not quite. This works when your product delivers a clear transformation: Input → processing → output. That’s where Manus shines.

But even if you're building something more complex — say, a social platform, marketplace, or collaborative tool — you can still use Manus to test parts of the experience:

  • Instead of simulating network effects, simulate the matching experience.
  • Instead of building the full community, test whether the conversation output is useful.
  • Instead of real-time features, test whether the final deliverable provides value.

You’re not testing the whole product — just the core job it’s meant to do.

And that’s often enough to decide whether to build it at all.

on September 11, 2025
  1.

    Love this mindset shift. Simulating the workflow with AI to validate early makes a lot of sense. One thing we found though is that once you move to production, orchestrating retries and state between steps tends to outgrow simple simulations. Anyone here run into that?

  2.

    Workflow simulation with AI is a banger nowadays. At an early stage it can seriously challenge a founder's idea, but it's a clever way to get the signal to pivot or double down.

  3.

    So basically, not all products need a backend upfront — if the core workflow can be tested manually or via AI, you validate value first and only build what’s truly needed. Do you see this approach working for more complex products too?

  4.

    This hits home. So many products fail not because the tech is bad, but because the core workflow was never validated. Using AI as a stand-in backend to test whether people actually care about the outcome feels like such a practical way to cut through the noise. Really solid advice.

  5.

    Interesting procedure!

  6.

    As a founder who’s obsessed with building and shipping, I think your post cuts right to the heart of what often slows startups down: the urge to “engineer” before we truly validate. The idea of using AI agents to simulate the backend and run workflows manually isn’t just efficient, it’s a mindset shift—focusing on outcomes before infrastructure.

    One unique perspective I’d add: By putting AI in the loop early, we’re not just testing market need—we’re also shaping how humans and AI will ultimately collaborate in the product experience. This approach helps reveal where an “AI in the backend” truly delivers value and where human judgment or user intervention is still needed, which can reshape product vision.

    In my own journey, I’ve found that starting with workflow validation using AI can surface surprising insights about the real problems to solve, often leading to a simpler and more impactful end product. It pushes us to ask, “How little do we need to build to prove value—and will AI let us deliver that value even before we write a line of production code?”

    Thanks for reminding us that sometimes the smartest build is, at first, not to build at all.

  7.

    every developer has the urge to dive right in / we get ahead of ourselves. this is a good reminder to slow down and test test test. thanks again.

