4 Comments

One User's Feedback Saved Me Weeks of Debugging

Amy joined my beta on Day 2.

On Day 3, she DMed me: "Save button wasn't working."

I checked. It worked for me. Must be a one-off, right?

Wrong.

Turns out, the save button was failing silently for a subset of users. No error messages. No logs. Just... nothing.

If Amy hadn't spoken up, I would have lost weeks of feedback from users who quietly churned.

What we found:

Silent failure in the morning page save function

No error tracking (we added Sentry immediately after)

Edge case with auth timing that only affected some users

What we fixed:

Added loading states (so users know something is happening)

Implemented fallback error UI

Set up Sentry to email us on every error
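For anyone curious what those three fixes look like wired together, here's a minimal sketch of the pattern: loading state on, report the error instead of swallowing it, show fallback UI, loading state off. The `api`, `tracker`, and `ui` objects are placeholders, not the app's real code; `captureException` mirrors the Sentry SDK's method name.

```javascript
// Minimal sketch of "no more silent failures" -- hypothetical names,
// not the real app's code. `tracker` stands in for a Sentry-style client.
async function handleSave(entry, { api, tracker, ui }) {
  ui.setLoading(true); // loading state: the user sees something happening
  try {
    await api.saveEntry(entry);
    ui.showSuccess("Saved");
  } catch (err) {
    tracker.captureException(err); // report it instead of swallowing it
    ui.showError("Save failed. Please try again."); // fallback error UI
  } finally {
    ui.setLoading(false);
  }
}
```

The key detail is the `finally`: whatever happens, the spinner comes down, so the user is never stuck staring at a frozen button.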

Amy's one message saved me from building on a broken foundation for weeks.

What's the most valuable piece of user feedback you've ever received?

If you've got a sharp eye for UX leaks or edge cases, I'd love to have you try the app and tell me where you get stuck. Here is the link: app.wheeloffounders.com

posted to Building in Public on March 20, 2026
  1.

    The auth timing edge case is such a classic gotcha. Race conditions where everything works locally but breaks for real users with different auth states are nearly impossible to catch without someone actually using it in the wild. Smart move adding error tracking right away. Users who DM you about bugs are worth their weight in gold.

    1.

      Yes — the 'you know too much about how it works' problem is real. I've started doing the same: every new feature gets tested by someone who hasn't seen it before I call it done.

      Four bugs in hours is brutal, but also kind of a gift. Those first external users who actually speak up are worth more than a hundred silent ones.

      What's the most unexpected bug your first user found? I'm collecting war stories to remind myself why this phase matters.

  2.

    This hit home: I had almost exactly the same experience today.

    The first external user on my tool found four critical bugs within hours that I'd completely missed after weeks of testing it myself. WordPress connections failing silently, users getting stuck with no idea why, onboarding that assumed everyone already knew what they were doing.

    Every single one would have killed conversion, and I never would have spotted them because I knew too much about how it worked.

    Amy doing you a favour by speaking up is the exception, not the rule; most users just quietly leave. Which is why I now test every single feature with someone who has never seen it before I consider anything done.

    Silent failures are the most dangerous kind. Good call adding Sentry immediately.

      1.

      Exactly. The auth race condition was invisible on my machine because everything loaded in perfect order. It took someone with a slightly slower connection, or a different browser, for it to actually surface.
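      If it helps to picture it, here's a toy version of that kind of race. All the names (`auth.getToken`, `api.saveEntry`) are made up for illustration: the buggy path fires the request assuming the token has already loaded, which is only true when everything resolves in perfect order; the fix explicitly awaits it.

      ```javascript
      // Toy illustration of an auth-timing race -- hypothetical names,
      // not the real app's code.

      // Buggy pattern: reads the token synchronously. On a slow
      // connection, auth hasn't finished and `auth.token` is undefined.
      function saveBuggy(entry, auth, api) {
        return api.saveEntry(entry, auth.token);
      }

      // Fixed pattern: wait for the token before making the request.
      async function saveWhenReady(entry, auth, api) {
        const token = await auth.getToken(); // resolves once auth loads
        if (!token) throw new Error("not authenticated");
        return api.saveEntry(entry, token);
      }
      ```

      Simulating the "slow connection" is just making `getToken` resolve a tick later, which is exactly the kind of delay that never happens on a dev machine.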

      I've added that to my mental checklist now: test with simulated slow connections, clear cache, random incognito windows. Anything to break my own assumptions.

      Have you found any other blind spots that only show up when real users hit them?
