I've built multiple products over the past few years. Every time, I picked my target customer on gut feeling. "Developers." "Small businesses." "SaaS founders." Whatever sounded reasonable.
Every time, I wasted months messaging people who didn't care.
So for my latest product (a distribution framework for indie hackers), I decided to get systematic about it. Before sending a single outreach message, I built a scoring system and evaluated 6 different customer segments.
Here's how it works.
4 categories, 14 criteria, weighted scoring
Each potential ICP gets scored out of 115 points across:
Pain (max 32 pts) — Does the problem actually hurt? Pain intensity gets 3x weight because nothing else matters if they don't care enough to pay. I also score how often they ask about the problem and whether the same questions keep recurring.
Fit (max 27 pts) — Can my product actually solve their problem? If yes, how well? Content richness (do they have enough material for my framework to work with) gets 3x weight here.
Business (max 35 pts) — Can they buy? Budget fit, decision speed, market size, and how easy they are to find and reach. A perfect customer you can't find is useless. Accessibility gets 2x weight.
Value (max 21 pts) — Is it worth it for me? Lead value, support cost, competitive gap. Some segments cost more to serve than they'll ever pay.
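As a rough sketch, the scorecard can be expressed in code. Only the 3x/2x weights and the category maxima (Pain 32, Fit 27, Business 35, Value 21, totaling 115) come from the post; the individual criterion names and raw scales below are my assumptions, chosen so each category sums to its stated maximum.

```python
# Illustrative weighted ICP scorecard. Criterion names and raw scales
# are assumptions; the 3x/2x weights and category maxima are from the post.

CRITERIA = {  # criterion -> (max_raw_score, weight)
    # Pain (max 32): intensity is 3x-weighted
    "pain_intensity":      (8, 3),
    "pain_frequency":      (4, 1),
    "recurring_questions": (4, 1),
    # Fit (max 27): content richness is 3x-weighted
    "content_richness":    (6, 3),
    "can_solve":           (5, 1),
    "solves_how_well":     (4, 1),
    # Business (max 35): accessibility is 2x-weighted
    "accessibility":       (7, 2),
    "budget_fit":          (7, 1),
    "decision_speed":      (7, 1),
    "market_size":         (7, 1),
    # Value (max 21)
    "lead_value":          (7, 1),
    "support_cost":        (7, 1),
    "competitive_gap":     (7, 1),
}

MAX_SCORE = sum(raw * w for raw, w in CRITERIA.values())  # 115

def score_icp(raw_scores: dict) -> int:
    """Weighted total for one segment; raw_scores maps criterion -> 0..max_raw."""
    total = 0
    for name, (max_raw, weight) in CRITERIA.items():
        raw = raw_scores[name]
        if not 0 <= raw <= max_raw:
            raise ValueError(f"{name} out of range: {raw}")
        total += raw * weight
    return total
```

The point isn't the exact scales; it's that a 3x weight on pain intensity makes one criterion worth nearly a quarter of the whole score, which is exactly the bias you want.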
What the scores looked like
I evaluated 6 segments:
Indie hackers building AI SaaS: 94/115 (82%)
Technical founders pre-launch: 87/115 (76%)
Struggling SaaS founders post-launch: 81/115 (70%)
Developer tool creators: 76/115 (66%)
No-code/low-code founders: 68/115 (59%)
Product managers turned founders: 64/115 (56%)
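The percentages are just each segment's total divided by 115. Reproducing the ranking from the post's own numbers:

```python
# Segment totals as reported in the post; percentages are score / 115.
segments = {
    "Indie hackers building AI SaaS": 94,
    "Technical founders pre-launch": 87,
    "Struggling SaaS founders post-launch": 81,
    "Developer tool creators": 76,
    "No-code/low-code founders": 68,
    "Product managers turned founders": 64,
}
MAX_SCORE = 115

ranked = sorted(segments.items(), key=lambda kv: kv[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score}/{MAX_SCORE} ({score / MAX_SCORE:.0%})")
```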
The top ICP won on every category. High pain (they're bootstrapped and desperate for customers), great fit (they already understand AI tools), easy to reach (active on Twitter, IH, Reddit), and fast decision makers (solo founders who buy in hours, not weeks).
The bottom ICP had lower pain, was harder to find, and had way more competition from courses and consultants targeting the same group.
What happened when I started outreach
I sent 21 cold messages to the top ICP. Got a 35% reply rate and 2 sales at $39.
Small numbers, but the conversion rate validated the scoring. These people had the exact pain I was targeting, and they responded.
Without the scoring, I probably would have started with "all SaaS founders" because it sounds bigger. And I'd be sitting at a 2% reply rate wondering what went wrong.
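To put that gap in back-of-envelope numbers (the 2% broad reply rate here is the hypothetical from the paragraph above, not measured data):

```python
import math

# ~7 replies came from 21 targeted messages at 35%. How many messages
# would a hypothetical 2% "broad ICP" rate need for the same result?
messages, targeted_rate, broad_rate = 21, 0.35, 0.02

replies = messages * targeted_rate                 # roughly 7 replies
needed_at_broad = math.ceil(replies / broad_rate)  # messages needed at 2%

print(f"{replies:.0f} replies from {messages} messages; "
      f"a 2% rate needs ~{needed_at_broad} messages for the same result")
```

Seventeen times the outreach volume for the same handful of conversations: that's the cost of skipping the hour of scoring.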
The takeaway
Spend an hour scoring before you spend a month messaging. The framework isn't complicated — 4 categories, honest scoring on each criterion, weighted by importance. You don't need perfect scores. You need relative comparison between segments.
The 30-point gap between my top and bottom ICP is the difference between traction and silence.
How do you pick who to target? Do you score ICPs systematically, or do you go with gut feeling? Curious what's worked for others.
This resonates a lot; adding weighted criteria to ICP research was a game-changer. Once you start assigning actual scores instead of vibes, you stop rationalizing the "exciting" segment and start following the data. It saved you from burning more months on the wrong audience. Frameworks really are the foundation of solid customer research. Without structure, you're just pattern-matching on hope.
"I evaluated 6 segments:"
Hi, did you use a prompt (and if so, can you share it) for the ICP identification?
I have an AI agent that does this for me. It is part of my Distribution Framework for Claude Code
https://beyondfolder.com/distribution
It also contains competitor analysis, SEO and content strategy, cold outreach templates, progress tracking, and more if you want to check it out.
This is great, thanks for sharing...really clean!
Glad you liked it. Trying to create more content on this platform and share useful resources
Really like the weighted scoring approach. I've been doing something similar but way less structured — basically just a gut-feel matrix in a spreadsheet.
The 3x weight on pain intensity resonates hard. I spent way too long chasing "developer tools" as a broad ICP last year before realizing that the subset who were actively frustrated (posting about it on Twitter, asking in forums) converted at like 10x the rate of the general pool. Pain frequency is underrated too — a problem someone hits daily is worth way more than something that bugs them quarterly.
One thing I'm curious about: did you find that the scoring shifted after you started getting actual customer data? I've noticed my initial ICP assumptions usually hold directionally but the weights change. Budget fit ended up mattering less than I expected because motivated buyers find money. Decision speed mattered way more.
35% reply rate on cold outreach is nuts btw. That alone validates the whole framework.
Currently I keep the scoring the same; for my products it has been spot-on from the beginning.
Indeed, the 35% reply rate was nuts; now it's closer to 27%. Still way higher than the supposedly good 5%.
The 3x weight on pain intensity is the most important insight here. Most founders (myself included) default to scoring ICPs on market size or ease of reach — both of which are about our convenience, not the customer's urgency.
One thing I'd add to your Business category: score the "decision pathway" separately from budget. Even cash-rich ICPs can be slow to buy if the decision needs 3 approvals. Solo founders score highest here precisely because there's no approval chain.
Your 35% reply rate with 21 messages is the real proof. At that signal quality, you don't need 10,000 emails — you need 200 targeted ones. Most people learn this backwards (blast → low reply rate → wonder why), so the fact that you front-loaded the scoring is the actual insight.
Indeed, most people don't realize that ICP is the biggest variable you need to get right. I certainly didn't at first.
This is gold. The weighted scoring (especially Pain 3x) is exactly the rigor most founders skip.
One layer deeper I'm curious about: Your framework validates who to target (the ICP), but how do you validate what they actually want to pay for before building the full distribution framework?
I learned that even with the perfect ICP (indie hackers building AI SaaS), "they need distribution" ≠ "they'll pay $X for this specific workflow." I wasted $20K building features my ICP said they wanted but wouldn't pay for.
Now I use video prototypes after ICP scoring: show the ICP a 3-minute demo of the solution, test if they'll pay, before coding the infrastructure.
Have you tested specific feature/price validation with your top ICP, or is the 35% reply rate + 2 sales your primary validation signal?
Would love to compare notes on "ICP scoring + demand validation" as a combined framework. Feels like we're solving complementary parts of the same problem.
Currently I have a product, and it finds ICPs for that product.
I will try to go backwards though, to find pain first and then build a product around that
Love this systematic approach. The 3x weight on pain intensity is spot-on — so many founders optimize for market size while ignoring whether anyone actually feels the problem acutely enough to pay.
One addition I'd suggest: test the "accessibility" score before you commit. I've seen ICPs that look perfect on paper but are impossible to reach without expensive ad spend or existing network access. Your framework accounts for this with the 2x weight, but I'd validate it with a small experiment: can you actually get 10 of these people on a call in a week?
The 35% reply rate you got validates everything. Most cold outreach sees 1-5%. When you nail the ICP, the difference is dramatic — they feel like you're reading their mind because... you are.
The "content richness" factor in your Fit category is interesting. You're basically scoring whether they have enough signal to apply your distribution framework to. That's a smart product-specific criterion that most generic ICP frameworks miss.
Have you found that your top ICP (AI SaaS indie hackers) has stayed consistent as you've gotten more customers? Or are you iterating on the scoring based on who actually converts and retains?
Indeed, the accessibility score matters as well.
I am still validating with this top ICP and then moving to others