
When “improving” is actually the wrong move

A mistake I keep catching myself making as a builder:

When something isn’t working, my instinct is to improve it.

  • Add clarity.
  • Polish the UX.
  • Refine the copy.
  • Ship another iteration.

What I’m slowly learning is that sometimes the right move isn’t to improve; it’s to pause and ask whether this deserves to exist at all.

A rough pattern I’ve noticed:

If I’m spending more time explaining a feature than watching people use it, that’s a smell.

If feedback sounds like encouragement instead of commitment, that’s a smell.

If progress feels busy but direction feels fuzzy, definitely a smell.

Real progress lately has come less from building faster and more from cutting earlier.

Still figuring this out in real time, but it’s changing how I think about “momentum”.

Would love to hear how others decide when to push through vs. stop and rethink.

on December 31, 2025

    This resonates a lot. I’ve found that “improving” is often a way to avoid the harder question of whether the thing actually earned another iteration.
    One rule that’s helped me: if I can’t clearly point to the moment a user gets value (not just says it sounds useful), that’s a signal to pause. When progress feels busy but activation is unclear, cutting is usually the real work.
    Still figuring out the push vs. pause line myself, but this framing around smells is spot on.


      You’re spot on. That “busy but unclear activation” feeling is usually the giveaway for me too. If we can’t point to a concrete moment where value actually lands for the user, adding more polish or features just delays the real decision.

      I’ve also found that framing it as pause vs. push helps: pausing isn’t inaction, it’s choosing to wait for a clearer signal. Smells don’t give perfect answers, but they do tell you when it’s time to stop marching.


      Appreciate it, glad it landed.


    One more heuristic that can be helpful: separate “improve” from “validate.” If the next iteration doesn’t create a commitment event (pay, schedule a call, use it daily for a week, refer someone), it’s not improvement, it’s rehearsal. So I’ll run a 48-hour test: one change, one clear ask, and a defined “kill / keep” threshold.
    Curious: what’s your default commitment event—time, money, or repeated use?


      I really like that distinction; “improve vs validate” is a sharp filter. For me it usually starts with time/behavior first (do they come back without a nudge?), then money later once the loop is clearer. The 48-hour kill/keep window is smart; it forces honesty fast.


    I think it comes with experience and is generally a gut feel. I am an accountant, not a programmer... sometimes I need to see a complete variance analysis by General Ledger code; other times you can eyeball the Profit and Loss statement and know something is wrong. It just takes time to get to that stage.


      That’s a great analogy. The longer you do this, the faster you can tell when something’s off: sometimes you need the full breakdown, other times one glance is enough. It feels very similar in product once you’ve seen enough patterns.


    How important would you say UX is in comparison to UI?


      I’d say UX matters more, but it’s harder to notice when it’s done right. UI gets the attention, UX earns the retention.

      A clean UI can get people in the door, but good UX is what makes them stick around and come back.


    Your "smells" are sharper diagnostics than most frameworks I've seen:

    "Explaining more than watching" - this is the killer. If you have to convince users the feature is valuable, the feature isn't proving its own value. Good features generate questions about how to use them, not why they exist.

    "Encouragement instead of commitment" - "I like this" vs "I'd pay for this" are completely different signals wearing similar clothes. One is politeness, the other is demand.

    One heuristic I've found useful: the edit test. When users give feedback, are they asking to add things (feature requests) or remove friction (usability)? Feature requests often mean the core doesn't resonate. Friction complaints mean the core works but the execution is rough. Only the second one deserves iteration.

    The hard part is emotional - improvement feels like progress even when it's not. Cutting feels like failure even when it's the smartest move. That asymmetry traps a lot of builders.

    Curious what finally triggers the cut decision for you - is it a metric threshold, accumulated signals, or more of an intuition built from patterns?


      This framing really clicks for me, especially the edit vs add distinction. What’s pushed me toward cutting isn’t usually a single metric, but a pattern:
      when I find myself explaining a feature more than watching it get used, or when feedback stays in the “nice idea” zone without turning into real behavior.

      Metrics help validate the decision later, but the signal usually shows up first in conversations and repeated friction. And you’re right: cutting feels like failure in the moment, even when it’s actually removing noise so the core can breathe.


        "Removing noise so the core can breathe" - that's a phrase worth keeping. It reframes cutting from loss to liberation.

        The conversations-before-metrics pattern rings true. I've noticed the same sequence: you feel something is off in user calls, then analytics confirm it weeks later. The metrics are lagging indicators of what human conversations already revealed.

        One thing that helps me act faster on those conversation signals: asking "if this feature disappeared tomorrow, would anyone email us asking where it went?" Not "would they notice" - but would they actively reach out. That question cuts through the politeness filter pretty fast.

        The emotional asymmetry you mentioned earlier is real though. We remember the features we cut that worked out. We forget the ones we kept too long that slowly drained momentum. Survivorship bias makes cutting feel riskier than it actually is.


          That question is sharp; “would anyone actively reach out?” cuts through a lot of self-deception. I like how it shifts the bar from passive appreciation to actual pull.

          And the point about metrics lagging conversations really resonates. By the time the numbers confirm something, the team usually felt it weeks earlier: hesitation, softer language, fewer follow-ups.

          Survivorship bias is such a good callout too. We remember the scary cuts that worked, but not the quiet drag from features we protected for too long. Framing cutting as clearing space rather than removing value makes it a lot easier to act sooner.
