13 Comments

Show IH: I Built an AI Tool That Simulates Any Audience Before You Pitch Them

Hi Indie Hackers! 👋

I want to share a tool I built called Poll Sim - an AI-powered audience simulator that helps you test your ideas before pitching them to real people.

The Problem:
Ever spent weeks crafting the perfect pitch, presentation, or product launch - only to realize nobody cared? I've been there. Whether it's a startup pitch, a political speech, or a marketing campaign, we often don't know how our audience will react until it's too late.

The Solution:
I built Poll Sim (https://www.poll-sim.com) to solve this problem. You simply describe your audience and what you want to test - and AI simulates how that audience would react.

How it works:

  1. Define your audience (voters, investors, customers, etc.)
  2. Enter your message, pitch, or question
  3. Get simulated reactions with detailed reasoning
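The three steps above boil down to conditioning a language model on an audience description. Poll Sim's actual implementation is not public, so this is only a minimal illustrative sketch of what such a persona-conditioned prompt might look like; the function name, parameters, and prompt wording are all assumptions, not the product's real code.

```python
def build_simulation_prompt(audience: str, message: str, n_personas: int = 3) -> str:
    """Compose a prompt asking a language model to react as the described audience.

    Hypothetical helper for illustration only; not Poll Sim's actual code.
    """
    return (
        f"You are simulating {n_personas} distinct members of this audience: "
        f"{audience}.\n"
        "React to the following message. For each persona, give a reaction "
        "and the reasoning behind it.\n\n"
        f"Message:\n{message}"
    )


# Example: test a pitch line against a simulated investor audience.
prompt = build_simulation_prompt(
    audience="early-stage SaaS investors",
    message="We cut churn 40% by automating onboarding emails.",
)
print(prompt)
```

The returned string would then be sent to whatever model backs the simulator; asking for several distinct personas in one prompt is one plausible way to get the varied, reasoned reactions described in step 3.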

Who it's for:

  • 🎯 Politicians testing campaign messages
  • 📣 Influencers gauging content reactions
  • 💼 Entrepreneurs validating pitches
  • 🎭 Celebrities managing public perception
  • 📈 Marketers optimizing campaigns

Tech Stack:

  • Built with modern web technologies
  • Powered by advanced AI models
  • Clean, intuitive interface

I'd love to hear your feedback! What features would make this more useful for you?

Check it out: https://www.poll-sim.com

Thanks for reading! 🙏

Posted to Show IH on April 16, 2026
  1.

    Tried this approach for a PM pitch last month - GPT-4, simulated 3 segments. Main gap I hit: the sim tells you what they'd say, not why they'd actually change behavior. Curious how you're handling persona calibration when real feedback comes back differently?

  2.

    Cool idea. The value will really come down to how realistic and opinionated those audience responses feel.

  3.

    Interesting idea. My honest concern: the hardest part isn't simulating a reaction — it's simulating a realistic one. Audiences are messy, contradictory, and context-dependent in ways that are hard to capture with a prompt. What's your approach to avoiding the "AI sycophancy" problem where the simulator just tells people what they want to hear? I've been working on AI agent systems myself and this is one of the trickiest UX challenges — balancing useful feedback with honest (sometimes uncomfortable) feedback. If you've cracked that, this could be really valuable.

  4.

    Honestly, this is one of those ideas where the value makes sense right away.

    People usually don’t know whether a pitch or message is landing until it’s already out in the world, so being able to test it beforehand is pretty compelling. I can see this being useful for founders, marketers, and even people working on landing pages or outreach copy.

    My main question would be around how you think about simulation vs actual behavior. Not in a skeptical way, just because that’s probably the first thing most people will wonder. Still, even as a way to sense-check messaging before launch, it feels useful.

    Curious what kinds of use cases people are already trying with it.

  5.

    This is a really interesting use case — especially for testing messaging before going live.
    One thing I’ve noticed when using AI for this kind of iterative work is that things get messy surprisingly fast. You end up with multiple versions of pitches, different audience reactions, small tweaks… and it becomes hard to compare or even remember what worked.
    I’m curious how you’re thinking about that part — do you plan to help users track or organize different iterations, or is it more about getting quick directional feedback?
    Feels like that could become a pretty important piece as people start using it more seriously.

  6.

    Nice positioning here — the value prop is easy to get in one read.

    What stood out to me is that this could be useful not just before a launch, but also for refining messaging between iterations. I’d be curious which use case is getting the strongest pull so far: founders testing pitches, marketers testing copy, or creators testing content angles.

    Cool concept, and the demo feels immediately understandable.

  7.

    Hi Poll Sim,
    I love the idea of 'rehearsing' the audience's reaction before the curtain actually rises. It’s much better to find the dissonance in a simulation than during the live performance.

    I’m curious from a technical perspective: How do you handle 'hallucination' in the audience's reasoning? For example, if I test a niche technical pitch, does the AI maintain a consistent persona throughout the simulation?

    Great launch! I'll be keeping an eye on this one while I keep my own sentinels running.

  8.

    The idea of pre-validating audience reactions before going public is solid — it's basically a cheaper version of focus groups. The skepticism around AI-simulated opinions is fair (the gap between stated and actual behavior is real), but as a directional tool for shaping your messaging, this makes a lot of sense.

    One use case I'd find interesting: testing how AI-powered tools get received by non-technical audiences. I build MCP servers for Claude and the hardest part is always explaining value to people who've never touched an API. Something like Poll Sim could help figure out which framing actually lands before you write the landing page copy.

    Curious about your monetization path — free with limits, or paid from the start?

  9.

    Cool concept. The core value is obvious - getting signal before you commit to a real pitch or campaign.

    The part I'd push on: how are you handling the gap between simulated reactions and actual ones? AI audience models tend to reflect training data patterns, which means they'll probably nail reactions from well-represented groups and miss badly on niche or contrarian audiences. That's fine as a caveat, but it becomes a problem if someone makes a real decision based on it.

    Curious what the accuracy looks like when users come back after pitching for real - are you collecting any of that feedback to close the loop?

  10.

    Are the simulations backed by data on those particular markets?

    For example, if my product was AI and I simulated "Landscapers" as the audience, would it be using polls on landscapers about AI as inputs to simulating them?

  11.

    Interesting idea, especially for testing before committing to something publicly.

    The part I’d question is how close the simulated reactions get to real behaviour. People often say one thing and do another, especially when it comes to buying or engaging.

    Feels useful for shaping thinking early on, but harder to rely on as a signal for what will actually convert.

    Have you compared the simulated responses to real feedback yet?

  12.

    Using AI to simulate audience reactions is a brilliant way to shorten the feedback loop before a high-stakes launch. Testing a pitch against a 'simulated investor' could save founders weeks of trial and error.
    Since Poll Sim is all about high-stakes validation, you should enter it into this competition: "Prize pool just opened at $0. Your odds are genuinely the best they'll ever be.
    $19 entry. Winner gets a real trip to Tokyo, flights and hotel booked by us.
    Round 01 closes at 100 entries." tokyolore.com

  13.

    Nice idea. I tried it. Very easy to create a poll and request votes. It's fun to see the "theoretical" trends within a few button clicks.
