Honest week 12 update on the open-source PM tools I'm building nights and weekends.
The number that matters: 0 features shipped.
I set out to add AI agent coordination to the sprint planner. Instead I spent the entire week in conversations -- here on IH, in Slack communities, reading threads about what breaks when you try to manage PM workflows with AI agents.
Turns out that was the right call.
The thing I keep running into: founders building PM tools (myself included) tend to describe what the tool does, not the moment when someone desperately needs it. I was posting about my sprint planner in places where PMs were just browsing. Low conversion, low engagement, low signal.
This week I started finding threads where people were mid-frustration -- actively complaining about Jira, asking how to handle sprint planning when half your team is AI agents, trying to figure out what a PM even does when the engineers are shipping 10x faster. Different conversations entirely. No convincing needed.
"Find people who are already trying to solve the problem" is obvious advice. Living it for a week is different.
What I'm building toward: an open-source dashboard that makes AI agent output visible to PMs who aren't running the agents themselves. The sprint planner already exists (rough). The AI coordination layer is the missing piece.
Next week: ship the context file interface. Real feature, real code, real deadline.
Anyone else finding that the distribution shift (from "tell people about it" to "find people mid-problem") changes what you build too? It's affecting my roadmap.
The 'find people mid-problem' shift usually means the market you end up serving looks nothing like the one you started with. Has the target user changed too, or just what you're building? That's the hard part for nights-and-weekends builders since you can't chase two markets on limited hours.
yeah both shifted honestly. started thinking this was for solo PMs, ended up with the most useful conversations from devs who wear the PM hat and hate it. the mid-problem piece matters a lot - they weren't searching for a PM tool, they were frustrated with what they had right now. different entry point entirely
I love your style and your choice of words, and I can hear and feel your frustration. I am rooting for you …
Judith
thanks Judith, means a lot. the frustration is real but it's useful frustration - it's pointing somewhere
Love this update — a “0 features shipped” week that’s actually packed with learning is so underrated. I’m very much in the same boat: I’m a builder at heart and it’s genuinely hard to stop coding and just sit in conversations, but every time I force myself to do it I end up with way sharper insights and a better roadmap. Your post is a good reminder that talking to people mid-frustration is as important as shipping the next feature.
exactly - the hard part is the coding instinct kicks in the moment you find something. conversation says "users want clearer sprint summaries", and I'm already thinking about the data model before they've finished the sentence. stopping that reflex is real work
The shift from "tell people about it" to "find people mid-problem" is something I went through recently too. I was posting about my product in all the usual places and getting polite nods. Then I started hanging out in threads where people were actively frustrated with their current workflow — completely different energy. They're not evaluating you, they're relieved someone might have an answer.
To your question about whether it changes what you build — 100%. When you're talking to people mid-frustration, you hear the actual shape of the problem, not the cleaned-up version they'd give in a survey. I ended up deprioritizing features I thought were essential and building something I never would have spec'd on my own, purely because three different people described the same pain point in almost identical words.
The 0 features shipped week feels unproductive in the moment but honestly those are usually the weeks that prevent you from building the wrong thing for 3 months. Sounds like you're in the right phase.
"cleaned-up version vs mid-frustration" - that framing is exactly it. The survey answer is already a story they have decided to tell you. The frustration is the raw material before the story gets assembled. I had someone describe a pain point to me in Slack and I realized the feature I had been building for 3 weeks solved a completely different problem than what they actually had. Redirected the whole sprint.
Mykola, your week 12 update hit home. I’m a solo dev building PRIZM, and I’ve been going through the exact same shift: moving from "shouting about my tool" to "finding people mid-bleeding."
You mentioned engineers shipping 10x faster with AI agents. Here’s the scary part I’m seeing: when shipping speed increases 10x, the speed of capital leakage also increases 10x. Most PM tools track "velocity," but they don't track the "Annualized P&L" impact of those sprints.
I launched on PH last week and got only 2 upvotes. Why? Because I stopped selling "features" and started showing founders the "Red Numbers" (Net Loss) their fast-shipping teams were actually creating. It’s an uncomfortable conversation, but like you said, it’s the only one that matters.
I’ve built a logic engine that converts sprint/growth metrics into a 12-month financial impact. I’d love to hear how you think "Financial Control" fits into the AI coordination layer you’re building.
I’m not posting a link here to respect the sub’s rules, but my profile has the prototype if you want to roast the logic.
Keep shipping (the right things).
"capital leakage increases 10x" - that's a sharper framing than any I've heard before. Velocity without financial context is just a feel-good metric. What I keep running into when building the coordination layer: the data is there, but nobody connects sprint output to actual cost. Agents can surface it, but most teams are not asking the question. Send me a link - genuinely curious how you are modeling the 12-month impact.
Mykola, I'm thrilled the "capital leakage" framing resonated with you. That disconnect between sprint velocity and actual cost is exactly the gap I'm trying to bridge.
Here is the link to the logic engine: https://prizm-v2-6dohhi.flutterflow.app/homePage
If you want a quick test, try plugging in the engineering cost of a typical 2-week sprint as the "fixed cost," and see what kind of CVR lift is actually required just to break even on that specific feature.
Let me know how the math feels from a PM's perspective. Would love to get your roast on the UI/UX as well!
Tried it. The break-even CVR framing hits differently when you see the number - most teams are not doing this math before a sprint, they are doing it (if at all) in the retro. By then the spend is already gone. The UI is clean, I would push one thing: the first number you show me should be the gap, not the inputs. Show me "you need X% lift to break even" before asking me to enter anything. First impression shapes whether I trust the model. Will poke around more.
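For anyone else following the thread, here's the break-even math as I understand it, sketched in Python. All the numbers and the function itself are hypothetical illustrations of the idea, not PRIZM's actual model:

```python
# Sketch of the "CVR lift required to break even on a sprint" idea.
# Every number below is made up for illustration.

def breakeven_cvr_lift(sprint_cost, monthly_visitors, revenue_per_conversion,
                       baseline_cvr, horizon_months=12):
    """Return the absolute and relative CVR lift a feature needs
    to pay back its sprint cost over the given horizon."""
    # Each extra conversion is worth revenue_per_conversion, so break-even
    # requires sprint_cost / revenue_per_conversion extra conversions.
    extra_conversions_needed = sprint_cost / revenue_per_conversion
    total_visitors = monthly_visitors * horizon_months
    absolute_lift = extra_conversions_needed / total_visitors
    relative_lift = absolute_lift / baseline_cvr
    return absolute_lift, relative_lift

# A typical 2-week sprint as the fixed cost: 2 engineers, ~$16k fully loaded.
abs_lift, rel_lift = breakeven_cvr_lift(
    sprint_cost=16_000,
    monthly_visitors=20_000,
    revenue_per_conversion=50,
    baseline_cvr=0.02,  # 2% baseline conversion
)
print(f"absolute lift needed: {abs_lift:.4%}")  # extra conversion rate
print(f"relative lift needed: {rel_lift:.1%}")  # lift over the 2% baseline
```

Even with generous assumptions, the required lift is a concrete number you can sanity-check before the sprint starts instead of in the retro.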
This is a great shift — but I’m curious how you’re filtering signal from those conversations.
In my experience, ‘mid-frustration’ feedback can be powerful but also noisy — some conversations reflect edge cases, others reveal repeatable problems.
Are you starting to see patterns emerge across those 20+, or still a mix of very different needs?
the mid-frustration ones are actually the most reliable signal -- they're not trying to be helpful, they're just venting, so you get the unfiltered version. for me the shift happened around conversation 8 when the same 2-3 problems started surfacing unprompted. when people who don't know each other use almost identical language to describe the same friction, that's when edge case becomes pattern. the ones i'm discounting are the polite sessions where people say everything is "interesting" -- almost zero signal there.
That’s a great way to frame it — especially the part about identical language emerging independently.
Feels like that’s the real signal — not just frustration, but consistency across different contexts.
Curious — once you identified those 2–3 patterns, did it change what you stopped building as well?
This really resonates. I've been building dev tools and had the exact same shift -- I used to write about what my tool does and post it everywhere, but the conversations that actually led to traction were the ones where I found someone mid-frustration in a forum or Slack thread, already looking for a solution.
The part about it changing your roadmap is interesting. When you talk to people mid-problem, you start hearing about the edges of the problem that you never would have spec'd out yourself. Like, for AI agent coordination, I imagine the people complaining about Jira aren't asking for "a dashboard" -- they're probably saying something more specific like "I have no idea what my AI agent committed last night." That granularity is gold for prioritization.
Curious -- of those 20+ conversations, how many led to people wanting to try what you've built so far? Or is it still more in the "gathering signal" phase?
A week where you ship zero features but have 20 real conversations is one of the most underrated weeks you'll have. The thing those conversations give you that features don't: the exact words customers use to describe their problem. That language is your marketing copy, your cold outreach subject lines, your post titles. I've gone back to notes from conversations 6 months old and used the exact phrasing. Those 20+ conversations have compounding value. Keep going.
the compounding value point is something I keep underestimating. I have notes from conversations 3 months ago where someone described their PM pain in a way I haven't been able to match myself, and I keep coming back to it. the exact phrase they used was something like "I don't need a better sprint board, I need to know if the sprint is going to fail before it starts" -- that's not something I would have written in a spec. good reminder to treat those notes as assets, not just research.