I used to think the hardest part was deciding what to build. Turns out the harder part is knowing when to stop. Lately I’ve noticed that weak ideas don’t crash; they fade. People are polite. Feedback is vague. There’s nothing clearly wrong, but nothing pulling you forward either.
I used to treat that as a signal to improve execution. Add features. Explain it better.
Now I’m experimenting with a different filter:
If none of that happens early, I’m starting to pause instead of pushing. Not sure yet if this is the right approach.
I’m still figuring this out. How do you usually make that call?
This hits home. I’ve noticed that stopping sooner often isn’t about discipline — it’s about clarity.
When the feedback loop is fuzzy, effort starts feeling expensive. When the loop is tight, even small progress feels worth continuing.
Curious — was there a specific signal (or lack of one) that made you decide to stop earlier this time?
Yeah, that framing makes sense. For me it wasn’t one dramatic signal, more the absence of pull. No follow-up questions, no one trying to use it in their own context, nothing getting “pulled” forward without me pushing.
Once I noticed that pattern repeating, it felt clearer to stop instead of forcing momentum.
That distinction is powerful — absence of pull is a signal in itself. Stopping early sounds less like quitting and more like listening to reality sooner.
Exactly. Reframing it as listening instead of quitting changed how it felt for me. It’s less about giving up and more about conserving energy for things that actually want to move forward.
The thought that weak ideas don’t fail loudly but just fade feels uncomfortably true.
I’ve probably mistaken polite feedback for progress more than once.
In hindsight, the silence and vagueness were already the signal.
Yeah, same here. Polite feedback is comforting in the moment, but it’s rarely a signal to keep pushing. Silence is uncomfortable, but it’s usually more honest. I’ve started paying more attention to who leans in, not who’s just being nice.
This is a really insightful shift in perspective. That second filter - how people reframe the problem - is something we've been watching closely while building our mental wellness app. We're using AI agents to provide personalized support, and noticing when a user describes their need in a way we hadn't considered is often more valuable than a 'polite' feature request. It's much better to fail fast than to over-engineer a solution looking for a problem!
That’s interesting. The way users explain the problem often ends up being more useful than what they ask for directly. We’ve seen that too; it’s usually where the real insight shows up.
Failing fast there makes way more sense than building around assumptions.