
Your support inbox is your product roadmap (here’s how to use it)

Your support messages are a goldmine.

They already contain the answers to what you should build next — the problem is, those answers are buried in noise.

Here’s how to extract them, with help from AI.

What tools to use for this

You’ll be using two types of tools in this workflow:

  • AI assistants like ChatGPT, Claude, or Gemini — to help you tag, cluster, score, and write tasks
  • Product research tools like Glean.io or Dovetail — to organize, search, and analyze large sets of messages

If you're working with a small number of messages, you can do everything in ChatGPT, Claude, or Gemini — just copy, paste, and prompt.

If you’re working with a bigger dataset or with a team, Glean or Dovetail can help organize everything, but you’ll still need ChatGPT (or a similar AI) to help you figure out what to fix and how to fix it.

Step 1 — Get your data ready for real analysis

AI can work with messy input, but the messier it is, the harder it is to get useful, actionable insights.

So before you do anything else, clean up the raw messages. A little structure goes a long way.

What to do:

  1. Export the last month or two of support messages — wherever they’re stored: Intercom, HelpScout, Zendesk, Gmail, Slack, Discord, app store reviews, etc.

  2. Clean them up:

  • Remove internal notes
  • Remove agent replies
  • Strip out greetings and sign-offs (“Hi,” “Thanks,” etc.)
  • Keep only the actual user message
  3. Tag 20–30 messages manually

Drop the cleaned messages into a spreadsheet. You’ll tag them here so AI can learn from it and take over later.

For every message, figure out:

  • What kind of user sent this? (New, paid, trial, churned, etc.)
  • What part of the product it’s about
  • What type of message it is (Bug, question, feature request, cancellation, etc.)
  4. Use AI to tag the rest

Once you've tagged your first 20–30 messages, you can use AI (like ChatGPT, Gemini, and so on) to speed up the rest.

Break the rest into small batches (about 20–30 at a time), and use a prompt like:

"Classify each message with:
Type of message (bug, feature request, question, etc.)
What part of the product it's about
What kind of user sent it (new, trial, paid, churned)
Whether the user sounds frustrated, confused, or just curious"

That’s all you need. Four labels per message. It doesn’t have to be perfect. It just needs to be clear enough for the next step: pattern detection.
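If you’d rather script the cleanup and batching than do them by hand, the whole step can be sketched in a few lines of Python. This is a minimal sketch, not part of the original workflow: the greeting/sign-off patterns, the batch size, and the exact prompt wording are all assumptions you’d adapt to your own data.

```python
import re

# Assumed patterns for greetings and sign-offs; extend these for your data.
GREETING = re.compile(r"^(hi|hello|hey)[,!.\s]*", re.IGNORECASE)
SIGNOFF = re.compile(r"(thanks|best|regards)[,!.\s]*$", re.IGNORECASE)

TAGGING_PROMPT = (
    "Classify each message with:\n"
    "1. Type of message (bug, feature request, question, etc.)\n"
    "2. What part of the product it's about\n"
    "3. What kind of user sent it (new, trial, paid, churned)\n"
    "4. Whether the user sounds frustrated, confused, or just curious\n\n"
)

def clean(message):
    """Strip greetings and sign-offs; keep only the user's actual message."""
    message = GREETING.sub("", message.strip())
    return SIGNOFF.sub("", message).strip()

def make_batches(messages, batch_size=25):
    """Split messages into prompt-sized batches (20-30 works well)."""
    return [messages[i:i + batch_size]
            for i in range(0, len(messages), batch_size)]

def build_prompt(batch):
    """Combine the tagging instructions with a numbered batch of messages."""
    numbered = "\n".join(f"{n}. {clean(m)}" for n, m in enumerate(batch, 1))
    return TAGGING_PROMPT + numbered

messages = ["Hi, how do I cancel my plan? Thanks"] * 60
prompts = [build_prompt(b) for b in make_batches(messages)]
print(len(prompts))  # 60 messages / 25 per batch -> 3 prompts
```

Each prompt is then pasted (or sent via API) to the assistant of your choice; the structure keeps every batch small enough for consistent tagging.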

Step 2 — Cluster messages by meaning, not keywords

People rarely say things the same way.

  • One says: “How do I cancel?”
  • Another: “Stop charging me.”
  • Another: “Close my account.”

They use different words, but they have the same problem.

You want to group these messages by what they mean, not the exact words — i.e., semantic clustering.

Here’s how:

  • Take your labeled messages from Step 1.
  • Use any tool that supports clustering by meaning — e.g., Glean.io, Dovetail, or ChatGPT Advanced Data Analysis (with a CSV).
  • Prompt: “Group these support messages by the problem they’re describing — even if the wording is different. Give each group a clear label and list which messages belong to it.”

You’ll end up with clear clusters like:

  • “Can’t cancel”
  • “Pricing is unclear”
  • “Users didn’t realize they had to verify their email”
  • “Feature doesn’t work the way users expected”

Now, clear themes are starting to emerge — themes that you can actually build from.
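Under the hood, tools that cluster by meaning compare embeddings rather than exact words. Here is a purely illustrative sketch (my addition, not the author’s method) of the clustering shape in plain Python, using bag-of-words cosine similarity as a crude stand-in for real embeddings. Note the limitation: unlike an embedding model, this toy can only group messages that share vocabulary, so “stop charging me” and “close my account” would still land in different clusters — which is exactly why real pipelines use embeddings.

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words vector; a real pipeline would use an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def cluster(messages, threshold=0.3):
    """Greedy clustering: a message joins the first cluster whose
    representative is similar enough, otherwise starts a new one."""
    clusters = []  # pairs of (representative vector, member messages)
    for msg in messages:
        vec = vectorize(msg)
        for rep, members in clusters:
            if cosine(rep, vec) >= threshold:
                members.append(msg)
                break
        else:
            clusters.append((vec, [msg]))
    return [members for _, members in clusters]

groups = cluster([
    "how do I cancel my plan",
    "cancel my plan please",
    "pricing page is confusing",
    "confusing pricing page",
])
print(len(groups))  # 2 clusters: cancellation and pricing
```

Swapping `vectorize` for an embeddings API call turns this toy into the real thing; the grouping loop stays the same.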

Step 3 — Score and rank the problems

Once your messages are grouped, the next step is to determine which problems are worth fixing first.

Go back to your AI and ask it to go through each group and do three things:

  • Break the group into smaller issues if people are struggling for different reasons
  • Count how many users are affected
  • Describe what kind of fix each issue might need — copy, UX, feature, backend

Then, have it rank the list based on:

  • How common each issue is
  • How many paying users vs non-paying users are affected
  • How easy it might be to fix

Prompt example:

"For each group of user messages:
Break it into smaller problems if needed (e.g. people can't cancel for different reasons)
For each problem, tell me:
    - How many messages mention it
    - What % are from paying users
    - What kind of fix is likely: copy, UX, feature, or backend
Then, rank all the problems from highest to lowest priority based on how common they are, how many paying users are affected, and how easy they are to fix"

What you’ll get back is a sorted list of issues:

  • Quick wins
  • High-impact projects
  • Low-priority problems

This becomes the starting point for what to build next.
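The ranking logic in that prompt can also be expressed directly, if you want a deterministic score alongside the AI’s judgment. A rough sketch — the weights, the 1-3 effort scale, and the example counts are all my assumptions, not figures from the article:

```python
# Sketch of the Step 3 ranking as a weighted score.
# Rewards frequency and paying-user impact; penalizes estimated effort.

def priority_score(issue, w_common=1.0, w_paying=2.0, w_effort=1.0):
    """Higher score = fix sooner."""
    return (w_common * issue["mentions"]
            + w_paying * issue["paying_mentions"]
            - w_effort * issue["effort"])

# Made-up example issues; effort: 1 = easy copy fix, 3 = backend work.
issues = [
    {"name": "Can't cancel", "mentions": 40, "paying_mentions": 30, "effort": 1},
    {"name": "Pricing unclear", "mentions": 25, "paying_mentions": 5, "effort": 2},
    {"name": "Email verification missed", "mentions": 10, "paying_mentions": 2, "effort": 1},
]

ranked = sorted(issues, key=priority_score, reverse=True)
print([i["name"] for i in ranked])
```

The weights are the interesting dial: doubling `w_paying`, as above, encodes the article’s point that paying-user pain should outrank raw message counts.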

Step 4 — Write a task for each problem

Pick 3 to 5 of the biggest problems, then for each one ask AI to answer the following:

  • What is the user trying to do?
  • What’s getting in their way?
  • What should we change?
  • Are there edge cases?

Then have it write one sentence that describes the task.

You’ll get back something like:

“Users can’t find how to cancel. The button is too hidden.
Task: Move the cancel button to the main menu. Add a confirmation.
Edge case: Don’t show it to people without the right access.”

Now you’ve got something you can build. Not an idea. A fix.

Step 5 — Make it repeatable (and maybe automated)

Once your system is in place, it only takes a few hours per month to repeat.

Here’s the monthly loop:

  1. Export the last 30 days of support messages
  2. Run through steps 1–4 (clean, tag, cluster, score, write tasks)
  3. Fix 1–2 things that show up often

You can automate some parts of this — like pulling the messages or tagging them — with tools like Zapier, internal scripts, or built-in exports. But even if you do it manually, it’s fast.
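If you do script it, the loop is just a thin pipeline. A skeleton under heavy assumptions — every function here is a placeholder, and the real versions would call your helpdesk’s export and your AI assistant:

```python
def export_messages(days=30):
    """Placeholder: pull the last `days` of support messages
    from your helpdesk (Intercom, Zendesk, a CSV export, ...)."""
    return ["how do I cancel", "pricing is confusing"]

def tag_and_cluster(messages):
    """Placeholder: batch messages to an AI assistant for tagging,
    then group them by meaning (Steps 1-2)."""
    return {"cancellation": [messages[0]], "pricing": [messages[1]]}

def top_issues(clusters, n=2):
    """Rank clusters by size and keep the n most common (Step 3)."""
    ranked = sorted(clusters.items(), key=lambda kv: len(kv[1]), reverse=True)
    return [name for name, _ in ranked[:n]]

def monthly_review():
    messages = export_messages(days=30)
    clusters = tag_and_cluster(messages)
    return top_issues(clusters)  # the 1-2 things to fix this month

print(monthly_review())
```

A scheduler (cron, GitHub Actions, Zapier) running `monthly_review` once a month is all the automation this loop really needs.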

That's it.

This is how to build exactly what users need and want.

on February 11, 2026
  1.

    The thing that surprised me most when I actually started doing this: the gap between what users ask for and what they actually need is enormous.

    I build tools for bookkeepers. Early on I kept getting requests like "can you add support for X bank format" and "can you handle Y date format." Took those at face value and spent weeks adding format after format. But when I actually clustered those messages the way you describe, the real pattern was different - people weren't asking for more formats. They were telling me the upload experience was confusing and they didn't trust the tool was reading their file correctly. The fix wasn't more parsers. It was a preview step that showed them exactly how their data was being interpreted before processing.

    Completely different fix, fraction of the effort, way bigger impact on support volume.

    One thing I'd push back on slightly - the monthly cadence might be too slow for early-stage products. When you've got fewer than 50 users, every single message carries outsized signal. I was doing this weekly (manually, just a spreadsheet) and it caught problems I would've missed on a monthly cycle. As you scale up the monthly loop makes more sense because the patterns become clearer in aggregate.

  2.

    The semantic clustering insight is critical. Support messages are symptoms, not diagnoses.

    When someone says "the cancel button doesn't work" they might mean: (1) they literally can't find it, (2) they clicked it but nothing happened, (3) they're frustrated about pricing and want an easy out, or (4) they expected a different cancellation flow based on past experience.

    Same words, completely different root problems requiring completely different fixes.

    The piece about "users didn't realize they had to verify their email" particularly resonates. Often what looks like a feature request ("add email verification reminder") is actually a UX clarity problem ("users don't understand they need to check email first"). The fix isn't always adding more—sometimes it's making what's already there impossible to miss.

    One addition to Step 3: weight messages from users who actually attempted the task vs users theorizing about what might be hard. Someone who tried 3 times and gave up is a stronger signal than someone who looked at the page and assumed it would be confusing.

  3.

    been doing similar with TellMeMo - first few support requests basically screamed "search is broken" so we reworked the entire query parser. went from 30% match rate to like 80%. curious though, how do you balance quick wins from support vs bigger strategic bets? feel like i'm always torn between fixing what users complain about today and building what they'll need in 3 months

  4.

    Really helpful, thanks for sharing. I'm currently building a platform to track idea progress (idea, MVP, early stage, startup, and more). It solves exactly these pain points. It's currently in early access. You can join the waitlist (link in my bio).
    Sharing here just because I can relate to it...

  5.

    Love the systematic approach to turning qualitative feedback into quantitative priorities. One thing I'd add: keep a "signal-to-noise" log. Every 3-6 months, review which clusters actually led to meaningful product improvements vs. false alarms. Over time, you'll recognize which feedback patterns are genuine friction points vs. one-off noise. This meta-analysis helps you trust the process more when it counts.

  6.

    I actually don't have an easy way for users to request support.

    At what product stage should ease of support be included? In the MVP? With the first $1 in revenue, or the first $100?

  7.

    The part about manually tagging 20–30 messages before using ChatGPT really caught my attention.

    I made a huge mistake by skipping data cleanup and jumping straight into coding. As a result, I ended up with 8 months and 0 customers for my SaaS.

    Here are some tips to help you avoid similar pitfalls:

    • Always clean and structure your support data first. This makes AI tagging much easier and more accurate.
    • Start small with manual tags to train your model before scaling up. This will help you avoid making mistakes and ensure that your model is trained on the best possible data.
    • Cluster by meaning, not keywords. This will help you avoid missing similar user issues.

    How do you handle noisy or vague messages that don’t clearly fit a category?

  8.

    Been doing something similar with a simpler twist - I dump all support messages into a shared doc and have the whole team read through them weekly. No fancy tools, just raw exposure to user pain points. The interesting thing is how it changes what people prioritize. Engineers who see "I tried 5 times and gave up" messages suddenly care a lot more about that edge case fix. The AI categorization you describe would definitely scale better though. At what point did you find manual tagging became too time-consuming?

  9.

    Man, that actually makes so much sense, thanks for sharing!

    Perhaps there should be a tool which monitors your inbox, and drafts out a roadmap with high priority items (repeated feature requests, etc...)?

  10.

    This framework is gold for service-based businesses too! As someone building a crowd marketing service, I don't have "support tickets" in the traditional sense, but I do have consultation calls, forum interactions, and client feedback.

    The semantic clustering approach you mentioned is particularly powerful. In my niche, clients describe the same problem in wildly different ways - some say "my forum posts get deleted," others "need help with community engagement," and some just say "want more organic traffic" - but they all want authentic community presence without getting banned.

    Your Step 3 about scoring problems really resonates. I've been collecting feedback but never systematically ranked what to fix first. The idea of categorizing fixes as "copy, UX, feature, or backend" is brilliant - in my case it would be "messaging, service offering, process, or automation."

    One question: For early-stage businesses still validating their service, would you recommend starting this feedback loop even before getting your first paying customers? I'm thinking about applying this to discovery calls and trial engagements.

    Thanks for sharing this structured approach - definitely implementing this monthly review cycle!

  11.

    Shifting from "ticket volume" to "value volume": that's a smart strategy, Aytekin. Most people see support as a problem to solve, but you've presented it as the key to achieving product-market fit. In my view, the semantic clustering in Step 2 is the real breakthrough. Recognizing that "stop charging me" and "close my account" reflect the same emotional frustration lets you improve the user experience, not just the interface. In a time when developing features is straightforward, delivering exactly what users want is the only true edge in the market.

    Quick question: do you think this once-a-month feedback cycle is the right pace, or would a shorter loop be better for early-stage startups?

    Great share!

  12.

    an overlooked resource, for sure.
