Hey Indie Hackers,
I’m the solo founder behind Poll-Sim (https://poll-sim.com), a web app that lets influencers, commentators, activists, and decision-makers instantly simulate how any audience would vote on their ideas, messages, or policies using AI.
The core idea is simple: instead of guessing what your target demographic thinks, you type in a poll question (or a full post), define the audience (or let the AI build realistic personas based on real demographic data), and the app runs a multi-AI simulation that “votes” just like a real poll would. No more waiting weeks or paying thousands for traditional polling.
But here’s the question I get asked the most — and the one I took very seriously on Reddit recently:
“I’m actually really curious how you tested this against real audiences. Did you compare the AI predictions with actual poll results or real-world reactions? Feels like that validation step would be the most important part here.”
(Full Reddit thread for context: https://www.reddit.com/r/ProductivityApps/comments/1swxt9n/web_app_helps_people_to_use_ai_to_predict/ — the specific comment chain started under this user’s question.)
I answered there in real time with screenshots and live examples, but I wanted to turn that raw discussion into a proper, transparent article with every source link included. This is exactly how I’m building trust in Poll-Sim’s AI simulations.
How Poll-Sim Actually Works (No Smoke and Mirrors)
Pure AI voting: Every “respondent” in the simulation is an AI agent powered by multiple models (not just one).
Real demographic grounding: I feed in objective demographic percentages and bias/trait descriptions drawn from public data.
Custom audiences: You can create your own personas or let the app generate them based on location, age, interests, political leanings, etc.
One-click simulation: Create the poll → hit simulate → get vote breakdowns, strong/lean support/oppose, comments, etc.
Full details and live demo: https://poll-sim.com
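To make the workflow above concrete, here is a minimal sketch of what a persona-weighted voting loop could look like. This is not Poll-Sim's actual code: the demographic shares are illustrative, and `ask_model` is a placeholder for the real multi-model LLM calls the app makes per respondent.

```python
import random
from collections import Counter

# Illustrative demographic distribution (age bands with rough shares).
# The real app grounds these in public data; the numbers here are made up.
DEMOGRAPHICS = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

OPTIONS = ["support", "lean support", "lean oppose", "oppose"]

def ask_model(question: str, persona: dict) -> str:
    # Placeholder for querying multiple AI models as a given persona.
    # Stubbed with a random choice so the sketch runs offline.
    return random.choice(OPTIONS)

def simulate_poll(question: str, n_respondents: int = 100) -> Counter:
    bands = list(DEMOGRAPHICS)
    weights = list(DEMOGRAPHICS.values())
    votes = Counter()
    for _ in range(n_respondents):
        # Sample each simulated respondent in proportion to demographic shares.
        persona = {"age_band": random.choices(bands, weights=weights)[0]}
        votes[ask_model(question, persona)] += 1
    return votes

results = simulate_poll("Should nuclear power be part of the 2050 energy mix?")
print(results)
```

In the real app, the aggregation step also splits votes into strong/lean buckets and attaches simulated comments; this sketch only covers the sampling-and-tallying skeleton.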
The Verification Process I’m Using Right Now
I don’t just “assume it works.” I’m actively cross-checking every major simulation against published real-world polls, and I create and run each simulation before looking at the real results, so I can’t cheat by tweaking prompts after the fact.
Here are two concrete, fully documented examples I shared on Reddit:
Example 1: Australia’s 2050 Energy Mix (Nuclear Power Support)
I ran a Poll-Sim on public attitudes toward nuclear energy in Australia’s future energy mix.
My simulation result: ~75% of the simulated Australian audience saw some role for nuclear (major or minor).
Real poll (Lowy Institute Poll): 66% of Australians see some role for nuclear by 2050 (37% major role + 29% minor role).
→ Within 9 percentage points of the real result.
Source: Lowy Institute Poll – “Australia’s 2050 energy mix”
https://poll.lowyinstitute.org/charts/australia-2050-energy-mix/
Example 2: Strong Opposition to Nuclear Power
Same topic, different angle.
My simulation result: 12% strongly oppose nuclear power.
Real poll (Lowy Institute 2024 Climate & Energy Report): 17% strongly oppose Australia using nuclear power to generate electricity (out of 37% total opposition).
→ Within 5 percentage points on the “strong” sentiment that matters most for messaging.
Source: Lowy Institute Poll 2024 Report – Climate change and energy (published 3 June 2024)
https://poll.lowyinstitute.org/report/2024/climate-change-and-energy/
Direct links to the exact Poll-Sim runs I used for these comparisons (feel free to open and inspect them yourself):
First verification run: https://www.poll-sim.com/?share=e5e5dcad-2f0a-4965-9201-b1351aae0147
Second verification run: https://www.poll-sim.com/?share=ffd2be60-3a21-11f1-b3ec-7c1e5262dc8f
I also compared against another set of real polls, where the app landed within 2–11 percentage points (strongly oppose: 50% real vs 48% simulated; total oppose: 71% real vs 82% simulated). Those results are visible in the Reddit thread screenshots if you want to see the side-by-side images.
Why This Matters (And Why I’m Sharing the Raw Process)
Most AI tools in the “predict audience reaction” space stay vague about accuracy. I’m doing the opposite:
Using multiple AI models and real demographic grounding instead of a single black box.
Publishing the exact simulation links alongside the real polls.
Inviting anyone to test it themselves and post their own comparisons.
If you have a real poll you want me to simulate blindly and compare, drop it in the comments or on the app — I’ll run it live and share the results publicly.
Try It Yourself (It’s Free to Test)
Head to https://poll-sim.com, create a poll about any topic or audience you care about, and see the simulation in seconds. No login required for basic tests.
I built this because I was tired of guessing what my own audience would think. Now I’m proving it works by showing the receipts — not just claims.
Would love your feedback, brutal tests, or feature requests. Let’s make audience prediction actually reliable.
— Sammy (poll-sim)
Melbourne, Australia
All sources linked above are public and verifiable as of April 2026. I’ll keep updating this article with new validation examples as I run them.