
Building in public: I’m realizing most problems aren’t what they seem.

One thing I’ve been noticing while building and talking to other founders:

The thing we think is the problem usually isn’t the real one.

“Need more traffic.”
“Need better ads.”
“Need more features.”

But when you zoom out, it’s often something deeper:
• unclear positioning
• weak first impression
• solving a mild problem, not a painful one
• or avoiding a hard decision

I’ve misdiagnosed my own bottlenecks more than once.

Curious: what’s something in your build that you originally thought was the issue, but later realized wasn’t?

Posted to Building in Public on March 2, 2026
  1.

    Biggest one for me was spending months adding features thinking that was the bottleneck. More integrations, more edge cases handled, more polish. But the actual problem was that people landing on the page couldn't tell what the product did within 5 seconds.

    All those features were invisible if someone bounced before understanding the core value. Repositioning the landing page around the problem instead of the feature list changed everything more than any backend improvement I'd made.

    The hard part is admitting you're building features because it feels productive, when the uncomfortable work is sitting with your messaging and being honest about whether it actually communicates.

  2.

    This resonates.

    For me recently, I thought the issue was the product itself, but the deeper bottleneck has been distribution and getting consistent feedback loops.

    The people who see it are interested. The challenge is making sure the right people see it regularly.

    Still figuring that part out.

  3.

    I’ve noticed a similar pattern - surface problems are usually execution symptoms.

    Traffic, ads, features - those are levers.

    The deeper constraint is often decision clarity at the top: what are we actually committing to, and what are we explicitly not doing?

    When that’s fuzzy, everything downstream looks like the bottleneck.

    1.

      That’s a really sharp way to put it.

      “Levers vs. commitment” is a big distinction. When the top level decision is fuzzy, everything below turns into experimentation without direction.

      I’ve also noticed that when teams don’t clearly define what they’re not doing, they end up diffusing effort and mistaking noise for bottlenecks.

      Curious, have you found that clarity usually comes from data, constraints, or just forcing a hard tradeoff?

      1.

        Data helps, but it rarely creates clarity on its own.

        I’ve found clarity usually comes when a constraint is imposed - capital, time, runway, or even a forced bet on one segment.

        Without constraint, data just multiplies options. With constraint, it sharpens conviction.

        1.

          I agree with that.
          “Without constraint, data just multiplies options” is a powerful way to frame it. I’ve seen the same thing: an abundance of data can actually delay commitment.

          It’s interesting how often real clarity comes from a forced narrowing rather than more analysis.

          Have you ever had to impose an artificial constraint just to force a decision?

          1.

            Yes - sometimes the constraint can be artificial.

            I worked with a founder running four parallel growth experiments. All had “some” signal, none had conviction. We forced a 60-day constraint: one segment, one channel, one metric. Focused. Everything else paused.

            Revenue didn’t jump immediately, but decision speed did. The feedback got cleaner. Iteration got sharper. Revenue soon followed.

            Abundance creates hesitation; scarcity creates movement, I think.

  4.

    This is exactly it. I spent weeks thinking the problem was "AI tools are expensive." The real problem was "nobody can see how much they're actually using."

    Once I reframed it as a visibility problem, TokenBar (https://www.tokenbar.site/) basically designed itself — menu bar icon, usage meters, reset countdowns. $4.99.

    The lesson: the surface complaint is rarely the actual problem. Dig one layer deeper.

    1.

      That’s a perfect example.

      “AI tools are expensive” sounds like a pricing problem, but it was really a visibility problem. Once the constraint was clear, the solution became obvious.

      I like that framing a lot: when the problem is defined correctly, the product almost designs itself.

      Out of curiosity, how did you realize it was a visibility issue and not pricing?
