A mistake I keep catching myself making as a builder:
When something isn’t working, my instinct is to improve it.
What I’m slowly learning is that sometimes the right move isn’t to improve; it’s to pause and ask if this deserves to exist at all.
A rough pattern I’ve noticed:
If I’m spending more time explaining a feature than watching people use it, that’s a smell.
If feedback sounds like encouragement instead of commitment, that’s a smell.
If progress feels busy but direction feels fuzzy, definitely a smell.
Real progress lately has come less from building faster and more from cutting earlier.
Still figuring this out in real time, but it’s changing how I think about “momentum”.
Would love to hear how others decide when to push through vs. stop and rethink.
Your "smells" are sharper diagnostics than most frameworks I've seen:
"Explaining more than watching" - this is the killer. If you have to convince users the feature is valuable, the feature isn't proving its own value. Good features generate questions about how to use them, not why they exist.
"Encouragement instead of commitment" - "I like this" vs "I'd pay for this" are completely different signals wearing similar clothes. One is politeness, the other is demand.
One heuristic I've found useful: the edit test. When users give feedback, are they asking to add things (feature requests) or remove friction (usability)? Feature requests often mean the core doesn't resonate. Friction complaints mean the core works but the execution is rough. Only the second one deserves iteration.
The hard part is emotional - improvement feels like progress even when it's not. Cutting feels like failure even when it's the smartest move. That asymmetry traps a lot of builders.
Curious what finally triggers the cut decision for you - is it a metric threshold, accumulated signals, or more of an intuition built from patterns?
This framing really clicks for me, especially the edit vs add distinction. What’s pushed me toward cutting isn’t usually a single metric, but a pattern:
when I find myself explaining a feature more than watching it get used, or when feedback stays in the “nice idea” zone without turning into real behavior.
Metrics help validate the decision later, but the signal usually shows up first in conversations and repeated friction. And you’re right: cutting feels like failure in the moment, even when it’s actually removing noise so the core can breathe.
"Removing noise so the core can breathe" - that's a phrase worth keeping. It reframes cutting from loss to liberation.
The conversations-before-metrics pattern rings true. I've noticed the same sequence: you feel something is off in user calls, then analytics confirm it weeks later. The metrics are lagging indicators of what human conversations already revealed.
One thing that helps me act faster on those conversation signals: asking "if this feature disappeared tomorrow, would anyone email us asking where it went?" Not "would they notice" - but would they actively reach out. That question cuts through the politeness filter pretty fast.
The emotional asymmetry you mentioned earlier is real though. We remember the features we cut that worked out. We forget the ones we kept too long that slowly drained momentum. Survivorship bias makes cutting feel riskier than it actually is.
That question is sharp: “would anyone actively reach out?” cuts through a lot of self-deception. I like how it shifts the bar from passive appreciation to actual pull.
And the point about metrics lagging conversations really resonates. By the time numbers confirm something, the team usually felt it weeks earlier: hesitation, softer language, fewer follow-ups.
Survivorship bias is such a good callout too. We remember the scary cuts that worked, but not the quiet drag from features we protected for too long. Framing cutting as clearing space rather than removing value makes it a lot easier to act sooner.