3 Comments

Vibe-coding doesn't fail at what to build. It fails at how much to build at once.

Hey IH — first post after a few weeks of reading and replying. Wanted to share the build that made me change how I work, because I think a lot of people here are quietly doing the same thing I was.
About four months ago, I started vibe-coding a research tool using lovable.dev. Plain-English prompts, ship a feature in a morning, "look at this thing I built before lunch."
It was fast. It was also a trap.
By week 6, I had a tool with surveys, interview workflows, card sorting, tree testing, affinity diagrams, personas, and a dashboard. Looked impressive. Felt like progress.
By week 9, every new feature I shipped was breaking two old ones. I'd add a new analysis flow, the surveys page would silently 500. I'd patch the surveys page, the persona builder would forget which project it belonged to. I started spending more days fixing regressions than building anything new.
Around week 10, I caught myself debugging a tree-test bug at 1 AM and realized I had no idea if a single user actually wanted tree tests. I had built it because I'd seen it on a competitor's pricing page. The whole stack was load-bearing on competitor envy.
So I did the thing I should have done in week 0. Cheapest test I could think of: I wrote three landing pages for three different framings of the product, $50 of ads behind each, and watched what people clicked, opened, and reached for a credit card on.
The results that mattered:
– The "all-in-one research suite" framing (what I'd been building toward): 0.7% CTR, 2 emails, 0 card-reaches.
– The "validate before you build" framing (one specific use case): 3.1% CTR, 29 emails, 9 card-reaches.
Same audience, same week, same money. The thing I'd been building was the thing the market wanted least.
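If you want to turn a test like this into a number you can compare across weeks, cost-per-signal is the simplest one. A quick sketch (the framings and counts are the ones from my results above; the helper function is just illustrative):

```python
# Cost-per-signal for each $50 landing-page framing.

def cost_per_signal(spend: float, signals: int) -> float:
    """Dollars spent per signal captured (inf if zero signals)."""
    return spend / signals if signals else float("inf")

results = {
    "all-in-one research suite": {"emails": 2, "card_reaches": 0},
    "validate before you build": {"emails": 29, "card_reaches": 9},
}

for framing, r in results.items():
    cpe = cost_per_signal(50, r["emails"])
    print(f"{framing}: ${cpe:.2f}/email, {r['card_reaches']} card-reaches")
# → all-in-one research suite: $25.00/email, 0 card-reaches
# → validate before you build: $1.72/email, 9 card-reaches
```

$25 per email versus $1.72 per email, on identical spend. That gap is what made the decision easy.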
The painful part wasn't that the data killed an idea. It was realizing how many of the regressions I'd been losing weeks to were on features the test said nobody wanted.
What I changed:

I stopped writing code on Mondays. Mondays are now for the cheapest possible test of whatever I'm about to build that week. If the test fails, the code never happens.
I deleted three feature surfaces from the app. Card sorting, tree testing, and the bespoke dashboard — all on the "vanity tier" of the validation. The regressions stopped almost immediately.
I rebuilt the homepage around the framing the test actually rewarded. Same product, sharper promise. Conversions up roughly 4× off a small base.

The vibe-coding loop is genuinely a superpower. But it's a superpower for building the wrong thing faster. It needs an outer loop — a 24-hour pre-build test — or it just compounds your wrong assumptions.
A pattern I now see in basically every "I'm stuck / I'm overbuilding / I'm breaking my own app" post on this forum: the build velocity is fine. What's missing is a cheap signal ahead of that velocity telling you which features to skip.
Two questions for IH:

For folks who vibe-code (Cursor, Claude Code, v0, Lovable, etc.) — what's your outer loop? Or is the loop just "ship and see"?
Have you ever killed a feature based on pre-build evidence, before writing any code for it? If yes — what was the test? I'm trying to collect failure modes.

Posted to Ideas and Validation on May 6, 2026
  1.

    The landing-page test probably exposed something deeper too: "Research Rocket" is pulling you back toward the exact "all-in-one suite" positioning the data just disproved. It sounds broad, exploratory, multi-tool, feature-heavy. But the winning signal was the opposite: "validate before you build." That framing is tighter, sharper, more urgent, and much more painful.

    The interesting part is that you already deleted features after the test. The brand may still be carrying the old product philosophy.

    A name like Exirra.com or Xevoa.com would fit the narrower validation-first positioning much better than "Research Rocket" if you keep moving in that direction, especially because the product now sounds less like "research software" and more like decision infrastructure for founders trying not to waste months building the wrong thing.

  2.

    Vibe-coding is a superpower for building the wrong thing faster if there is no map to guide the velocity. Your Monday testing rule is a brilliant linter for product-market fit that prevents you from hard-coding your own assumptions. It is much easier to delete a failing landing page than to debug a feature that should have never existed.
    Trading "competitor envy" for actual market signals is the best way to keep your codebase lean and your late nights quiet.
    Did the "validate before you build" crowd point out any specific tool they were currently using as their messy workaround?

  3.

    That's exactly the thing with AI: it's just a tool. It can't build a successful product without humans. You need to find the right idea, validate it, and turn it into a product vision. Thanks for highlighting this in your story.

    Also, AI fails at building complex production-ready apps. That's true. It's better to start with a simple, focused solution and scale it after market validation. Recently, I shared a guide to moving vibe-coded apps to production. I hope it helps founders out there.
