Hi Indie Hackers! 👋
I want to share a tool I built called Poll Sim - an AI-powered audience simulator that helps you test your ideas before pitching them to real people.
The Problem:
Ever spent weeks crafting the perfect pitch, presentation, or product launch - only to realize nobody cared? I've been there. Whether it's a startup pitch, a political speech, or a marketing campaign, we often don't know how our audience will react until it's too late.
The Solution:
I built Poll Sim (https://www.poll-sim.com) to solve this problem. You simply describe your audience and what you want to test - and AI simulates how that audience would react.
How it works:
Who it's for:
Tech Stack:
I'd love to hear your feedback! What features would make this more useful for you?
Check it out: https://www.poll-sim.com
Thanks for reading! 🙏
What did you use to train Poll Sim to simulate real human reactions and interactions? Seems like a tall task.
Cool concept, and the problem is real: most people get feedback way too late in the process.
You’re getting a model’s best guess at what a crowd might think, which could reinforce confirmation bias rather than challenge it. The whole point of validation is to be surprised by your audience, not confirmed by them.
I have built a few tools and am already feeling the weight of it all.
What is the one thing that keeps you up at night?
Yes, I think you're touching on the more authentic use case here.
"Simulate any audience" sounds really strong but also sets a really high bar for belief. De-risking messaging feels much more grounded; you're not asking people to bet on a prediction, just helping them catch obvious gaffes before they're released.
The speed + honesty problem you brought up is totally real. This is where most feedback loops fail anyway: either it's too slow to be meaningful, or it gets cushioned to the point that you're no longer receiving signal.
The element of trust is where this thing either flies or dies. "This will land" is difficult to accept. But if it shows you why something will or won't land, and you can actually see the trade-offs, it feels less like an edict to follow blindly and more like a tool to think with.
Your phrasing of this as a rapid iteration loop feels much more compelling. Particularly the concept of iterating by tweaking single variables, which is how most people actually work through messaging anyway, but without feedback.
The A vs B comparison and the surfacing of negative triggers seem like high-leverage features. Same goes for the ability to define very specific audiences – broad personas are easy to simulate, the value is in those very narrow, emotionally charged instances.
Regarding usage, I suspect it'll be more of a refinement tool than a greenlighting tool. Greenlighting suggests a level of confidence that's tough to reliably provide, and the consequences of being wrong drop the trust level dramatically. Refinement is a less aspirational promise and perhaps closer to what most people actually require: help me make this clearer, sharper, less likely to whiff.
If anything, pushing away from "this will land" toward "this is where this might not land and here's why" may be more credible. The former type of feedback is usually rarer anyway.
Thank you all for the incredible feedback and thoughtful questions! 🙏
@JoaoPaulo Glad you found it useful! The goal is definitely to make idea validation more accessible.
@aryan_sinh Great point about audience specificity. You're absolutely right - the more specific the audience definition, the more valuable and actionable the simulation results become. We're actually working on features to help users create more detailed audience profiles.
@sabahattink This is a really important question. The simulation isn't meant to replace real feedback, but to help you prepare and iterate before taking that risk. Think of it as a practice round that helps you identify blind spots. The "angry customer" scenario you mentioned is exactly the kind of worst-case testing that can be really valuable.
@Danielmrdev Interesting use case with medical devices! You're right about context being crucial in regulated environments. We're exploring ways to incorporate domain-specific constraints.
@RebeccaGaskell @MikeA10 Great questions about validation. We're actively collecting data to compare simulated vs actual outcomes. Early results are promising, and we'll be sharing more data soon.
Keep the feedback coming! What specific features would make this more valuable for your use cases?
Cool space to be building in. One thing I'd push on as a fellow "LLM wrapped around a specific workflow" builder:
The hardest problem with audience simulation isn't the UI or the prompt — it's that raw LLMs produce average internet opinion, not the opinion of a specific audience. Ask it "how will 45-year-old swing voters in Ohio react to this message" and you'll get something that sounds plausible but is really just a blend of everything written about Ohio voters on the public web. Confident-sounding, often wrong, and worst of all — directionally right just often enough to be dangerous.
Two things that might help if you haven't already:
Ground the simulation in something concrete (real transcripts, survey data, focus group notes the user uploads). Otherwise the model is hallucinating a persona from vibes.
Force the model to disagree with itself — run the same prompt with 3–5 different persona priors and show the spread. A single answer feels magical but hides uncertainty; a disagreement spread is actually useful for a founder deciding what to ship.
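To make that second point concrete, here's a rough sketch of the disagreement-spread idea. This assumes the OpenAI Python SDK; the persona list, model name, and prompt wording are all placeholders I made up for illustration, not anything Poll Sim actually does:

```python
# Sketch: run the same message past several persona priors and report
# the spread of reactions instead of a single confident answer.
# Assumes the OpenAI Python SDK; personas and model name are placeholders.
from openai import OpenAI

client = OpenAI()

PERSONAS = [
    "a skeptical seed-stage investor who has seen 500 pitches this year",
    "a busy operator who deletes most cold emails unread",
    "an enthusiastic early adopter who tries every new tool",
    "a price-sensitive small-business owner",
    "a compliance-minded enterprise buyer",
]

def reaction_spread(message: str) -> list[tuple[str, str]]:
    """Collect one reaction per persona prior for the same message."""
    reactions = []
    for persona in PERSONAS:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[
                {"role": "system",
                 "content": f"You are {persona}. React honestly and "
                            "critically; do not be polite for its own sake."},
                {"role": "user", "content": message},
            ],
        )
        reactions.append((persona, resp.choices[0].message.content))
    return reactions

for persona, reaction in reaction_spread("Our pitch: ..."):
    print(f"--- {persona}\n{reaction}\n")
```

Showing the per-persona disagreement rather than one averaged verdict is the whole point; the spread itself is the signal.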
Curious what kind of eval/feedback loop you're running to check whether the simulated reactions match real ones. That's where this stands or falls long-term.
Very interesting and certainly very useful, and it will make life easier for many...
Interesting idea — especially for founders who want a quick “reality check” before going out.
One thing that comes to mind though:
when the audience is very broadly defined (investors, customers, voters, etc.), it can sometimes make the output feel less grounded.
Feels like the value here gets much stronger when the audience is extremely specific and constrained.
Curious how you’re handling that — do you guide users into tighter audience definitions, or leave it fully open?
This resonates a lot — I'm building MailTest (email deliverability debugger) and hit the same "simulated vs real" gap from the other direction.

We tell users their SPF/DKIM/DMARC setup looks correct. But "looks correct in a test" and "actually delivers to Gmail inboxes" are very different things. The simulation passes, the real world fails.
Dennis's point about the "angry customer persona" is the key insight here. The most valuable simulation isn't the average case — it's the worst case. For email it's the spam filter that's seen every trick, the ISP that's already blacklisted your IP range.
The feedback loop idea is what I'd build toward: run the simulation, then compare against actual outcomes, then tighten the model. Without that loop, you're just getting increasingly confident wrong answers.
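To make the loop concrete, here's roughly what I mean — a toy sketch, not anything Poll Sim does today. The record fields and the choice of Brier score as the calibration metric are my assumptions:

```python
# Sketch of a simulate -> ship -> compare loop scored with the Brier score.
# Field names and the scoring rule are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Prediction:
    pitch_id: str
    p_success: float            # simulator's predicted probability the pitch lands
    outcome: int | None = None  # 1 = landed, 0 = flopped, None = not shipped yet

log: list[Prediction] = []

def record_outcome(pitch_id: str, landed: bool) -> None:
    """Backfill the real-world result once the pitch has actually shipped."""
    for p in log:
        if p.pitch_id == pitch_id:
            p.outcome = int(landed)

def brier_score() -> float:
    """Mean squared error of predicted vs actual: 0 is perfect, 0.25 is a coin flip."""
    scored = [p for p in log if p.outcome is not None]
    if not scored:
        return float("nan")  # nothing shipped yet
    return sum((p.p_success - p.outcome) ** 2 for p in scored) / len(scored)
```

Tracked over time, a worsening Brier score would tell you the personas are drifting from reality before your users notice.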
What's your current approach to calibrating the personas against real-world data?
I keep coming back to the same question with this kind of thing: does simulating what your audience would say actually help, or does it just make you more prepared for something that won't happen?
I work in cold outreach and we ran into a version of this problem. The LLM gives you a plausible reaction but plausible isn't the same as real. A simulated prospect "tells you" they'd reply to your email. The real person doesn't open it. The gap between what people say they'd do and what they actually do is the whole challenge.
Dennis's point about the angry customer persona is spot on. The most useful simulation isn't "what would a reasonable person think" — it's "what would the worst-case person do." The one who's busy, skeptical, and one second from clicking delete. If your pitch survives that, you're good.
What I'd actually find useful is a feedback loop: you send a real pitch, get rejected, and the tool tells you which of 5 possible reasons it was. That's more valuable than simulating before, because the before-sim tells you what sounds right in theory, not what actually converts in practice.
Curious if you've tested this against real outcomes yet. Like pitch A vs pitch B, did the simulator correctly predict which one would get more responses?
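The "which of 5 reasons" part could even be prototyped in a few lines as a constrained classification call. Rough sketch, assuming the OpenAI SDK; the reason taxonomy and model name are invented for illustration:

```python
# Sketch: classify a real rejection into one of a fixed set of reasons.
# The taxonomy and model name are assumptions for illustration.
from openai import OpenAI

client = OpenAI()

REASONS = [
    "wrong audience", "unclear value proposition", "bad timing",
    "price objection", "trust/credibility gap",
]

def classify_rejection(pitch: str, rejection: str) -> str:
    """Map a real-world rejection onto one reason from the fixed list."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system",
             "content": "Pick exactly one reason from this list and reply "
                        f"with the reason only: {', '.join(REASONS)}"},
            {"role": "user",
             "content": f"Pitch:\n{pitch}\n\nRejection:\n{rejection}"},
        ],
    )
    return resp.choices[0].message.content.strip()
```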
Interesting idea. We’ve been looking at something similar, but from a different angle, in regulated environments like medical devices. The biggest challenge we ran into wasn’t simulating the audience itself, but defining the context accurately enough. In those systems, reactions often depend on relationships between requirements, risk, and prior decisions, not just the message. Curious how your model handles context depth. Do you simulate responses based mostly on prompt description, or do you allow for some kind of structured input / history?
Cool idea. The issue we hit with something similar is that LLMs default to being polite. Real buyers are not. If you let users tune a persona to be an actual jerk, you get much better signal. The angry customer persona was always the most useful one at my old company.
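In practice the "jerk" tuning is mostly a system-prompt knob. Purely illustrative wording, not a prompt from any real product:

```python
# Hypothetical adversarial persona prompt; wording is illustrative only.
ANGRY_CUSTOMER = (
    "You are a customer who has been burned by tools like this before. "
    "You are busy, skeptical, and annoyed at being pitched. "
    "You interrupt, you nitpick pricing, and you say 'no' unless the "
    "value is undeniable. Never soften your reaction to be polite."
)
```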
Tried this approach for a PM pitch last month - GPT-4, simulated 3 segments. Main gap I hit: the sim tells you what they'd say, not why they'd actually change behavior. Curious how you're handling persona calibration when real feedback comes back differently?
Cool idea. The value will really come down to how realistic and opinionated those audience responses feel.
Interesting idea. My honest concern: the hardest part isn't simulating a reaction — it's simulating a realistic one. Audiences are messy, contradictory, and context-dependent in ways that are hard to capture with a prompt. What's your approach to avoiding the "AI sycophancy" problem where the simulator just tells people what they want to hear? I've been working on AI agent systems myself and this is one of the trickiest UX challenges — balancing useful feedback with honest (sometimes uncomfortable) feedback. If you've cracked that, this could be really valuable.
Honestly, this is one of those ideas where the value makes sense right away.
People usually don’t know whether a pitch or message is landing until it’s already out in the world, so being able to test it beforehand is pretty compelling. I can see this being useful for founders, marketers, and even people working on landing pages or outreach copy.
My main question would be around how you think about simulation vs actual behavior. Not in a skeptical way, just because that’s probably the first thing most people will wonder. Still, even as a way to sense-check messaging before launch, it feels useful.
Curious what kinds of use cases people are already trying with it.
This is a really interesting use case — especially for testing messaging before going live.
One thing I’ve noticed when using AI for this kind of iterative work is that things get messy surprisingly fast. You end up with multiple versions of pitches, different audience reactions, small tweaks… and it becomes hard to compare or even remember what worked.
I’m curious how you’re thinking about that part — do you plan to help users track or organize different iterations, or is it more about getting quick directional feedback?
Feels like that could become a pretty important piece as people start using it more seriously.
Nice positioning here — the value prop is easy to get in one read.
What stood out to me is that this could be useful not just before a launch, but also for refining messaging between iterations. I’d be curious which use case is getting the strongest pull so far: founders testing pitches, marketers testing copy, or creators testing content angles.
Cool concept, and the demo feels immediately understandable.
Hi Poll Sim,
I love the idea of 'rehearsing' the audience's reaction before the curtain actually rises. It’s much better to find the dissonance in a simulation than during the live performance.
I’m curious from a technical perspective: How do you handle 'hallucination' in the audience's reasoning? For example, if I test a niche technical pitch, does the AI maintain a consistent persona throughout the simulation?
Great launch! I'll be keeping an eye on this one while I keep my own sentinels running.
The idea of pre-validating audience reactions before going public is solid — it's basically a cheaper version of focus groups. The skepticism around AI-simulated opinions is fair (the gap between stated and actual behavior is real), but as a directional tool for shaping your messaging, this makes a lot of sense.
One use case I'd find interesting: testing how AI-powered tools get received by non-technical audiences. I build MCP servers for Claude and the hardest part is always explaining value to people who've never touched an API. Something like Poll Sim could help figure out which framing actually lands before you write the landing page copy.
Curious about your monetization path — free with limits, or paid from the start?
Cool concept. The core value is obvious - getting signal before you commit to a real pitch or campaign.
The part I'd push on: how are you handling the gap between simulated reactions and actual ones? AI audience models tend to reflect training data patterns, which means they'll probably nail reactions from well-represented groups and miss badly on niche or contrarian audiences. That's fine as a caveat, but it becomes a problem if someone makes a real decision based on it.
Curious what the accuracy looks like when users come back after pitching for real - are you collecting any of that feedback to close the loop?
Are the simulations backed by data on those particular markets?
For example, if my product was AI and I simulated "Landscapers" as the audience, would it be using polls on landscapers about AI as inputs to simulating them?
Interesting idea, especially for testing before committing to something publicly.
The part I’d question is how close the simulated reactions get to real behaviour. People often say one thing and do another, especially when it comes to buying or engaging.
Feels useful for shaping thinking early on, but harder to rely on as a signal for what will actually convert.
Have you compared the simulated responses to real feedback yet?
Using AI to simulate audience reactions is a brilliant way to shorten the feedback loop before a high-stakes launch. Testing a pitch against a 'simulated investor' could save founders weeks of trial and error.
Since Poll Sim is all about high-stakes validation, you should enter it into this competition: “Prize pool just opened at $0. Your odds are genuinely the best they'll ever be. $19 entry. Winner gets a real trip to Tokyo — flights and hotel booked by us. Round 01 closes at 100 entries. tokyolore.com”
Nice idea. I tried it. Very easy to create a poll and request votes. It's fun to see the "theoretical" trends within a few button clicks.