Last month I asked ChatGPT, Claude, Perplexity, and Gemini the exact questions people type when they are looking for a product in my category.
My product got mentioned 3 times across 47 prompts. One competitor got mentioned 31 times. Another one I had never even heard of got mentioned 19 times. That was a weird moment.
It made me realise "Am I visible in AI" is the wrong question. The real one is: who is getting recommended to my customers instead of me, and why?
So I am building a small tool that answers exactly that for indie SaaS founders. For a given landing page it runs the 30 to 50 buyer questions a real customer would ask an AI, compares the answers across ChatGPT, Claude, Perplexity, and Gemini, and shows you side by side which names keep showing up where yours should be. Then it gives you 2 or 3 fixes ranked by effort, the ones most likely to move the needle.
Before I write any code, I want to run 20 of these audits manually to see if the output is actually useful to anyone.
So here is the offer: the first 20 indie SaaS founders who drop their landing page URL in the comments get the audit for free. I will run it this week and send you back the full report: which names keep showing up where yours should be, and 2 or 3 fixes ranked by effort.
In exchange I would love your honest take on whether the report was actually worth your time, and what you would pay for something like this.
Drop your URL below and I will reply with an ETA.
Quick note: this is v2. I posted a first version of the idea here two days ago and got really useful feedback from @aryan_sinh and @ShellSageAI. They called out that the framing was off and the methodology too vague. They were right on both, so I rewrote it.
The per-bot variance is bigger than most people expect. In our StoreMD scans we track GPTBot, ClaudeBot, and PerplexityBot separately, and the spread between a single store's scores can be 40+ points. A store that GPT represents well might be nearly invisible to Perplexity because of how each bot weights structured data differently.
Fixing "LLM readiness" as a single number misses a lot of that. You're not invisible to AI in general, you're invisible to specific bots for specific reasons.
Has your audit been picking up much variance across different LLMs, or are the scores clustering together for most sites?
Ran into this from the Shopify side. Merchants we audit ask why their store shows up in ChatGPT for some searches and completely disappears for others.
Pattern from 50+ store scans: product pages with proper schema markup get cited way more consistently. It's basically a structured-data problem in disguise.
Most merchants have zero schema on their PDPs and collection pages, so LLMs have nothing reliable to pull from.
The thing that surprised us: conflicting review apps made it worse. Stores with 2-3 review apps exporting different structured data about the same product got penalized because the model couldn't reconcile the sources.
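For anyone wondering what "proper schema markup" means here, this is the rough shape of the Product JSON-LD we check for on a PDP, with one consolidated rating block instead of several conflicting ones. A minimal sketch with placeholder values, shown via Python just to make the structure explicit; a real store would render it server-side inside a <script type="application/ld+json"> tag:

```python
# Minimal Product JSON-LD for a product detail page (all values are placeholders).
import json

product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Pour-Over Kettle",
    "description": "1L gooseneck kettle for pour-over coffee.",
    "sku": "KET-001",
    "brand": {"@type": "Brand", "name": "Example Brand"},
    "offers": {
        "@type": "Offer",
        "price": "49.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
        "url": "https://example.com/products/pour-over-kettle",
    },
    # One reconcilable rating block, instead of several review apps each
    # exporting their own conflicting numbers.
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.7",
        "reviewCount": "132",
    },
}

print(json.dumps(product_jsonld, indent=2))
```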
Dropping our profile for the free audit.
Could you please send me the landing page URL for your profile? Would like to avoid checking the wrong thing.
https://storemd.vercel.app/?utm_source=hackernews&utm_medium=organic&utm_campaign=comment&utm_content=hn_comment
That's our main landing. Store health monitoring for Shopify: ghost billing, app bloat, LLM readiness. Curious what you find.
This is actually a really interesting angle.
As an indie developer, I’ve started to realize that visibility is slowly shifting from search engines to AI recommendations, and that changes the rules quite a bit. You can build a solid product, ship updates regularly, and still remain invisible if the model has never “seen” your app mentioned in enough places.
It makes me wonder whether the real challenge now is not just building something good, but making sure it exists in the broader conversation — directories, discussions, reviews, and communities.
Curious to see what patterns you find from these audits.
Yes, exactly that. Early pattern from the first audit (qria.io, 0/115 mentions across the 4 LLMs): the tools that win aren't necessarily older or bigger; they're the ones cited by third-party sources LLMs trust. ChatGPT cites G2, Reddit, Gartner Peer Insights, and vendor blogs (visible via utm_source=openai parameters in the URLs). Perplexity cites G2 8 times across 25 prompts. The clearest proof: an unknown brand called Blitzllama earned 4 Perplexity mentions purely through self-published "alternatives to X" pages. So directories, comparison content, and Reddit threads keep showing up as the actual citation sources.
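If anyone wants to replicate the citation-source check: I just collect every URL an answer cites and bucket them by domain and by utm_source. A rough sketch; the sample URLs below are placeholders, not real citations from the audit:

```python
# Bucket cited URLs by domain and by utm_source (e.g. "openai") to see which
# third-party sources an answer leaned on. URLs here are placeholders.
from collections import Counter
from urllib.parse import urlparse, parse_qs

def bucket_citations(urls: list[str]) -> tuple[Counter, Counter]:
    domains, utm_sources = Counter(), Counter()
    for url in urls:
        parsed = urlparse(url)
        domains[parsed.netloc.removeprefix("www.")] += 1
        source = parse_qs(parsed.query).get("utm_source", ["none"])[0]
        utm_sources[source] += 1
    return domains, utm_sources

cited = [
    "https://www.g2.com/categories/feedback-analytics?utm_source=openai",
    "https://vendor.example.com/blog/alternatives-to-x?utm_source=openai",
    "https://www.reddit.com/r/SaaS/comments/abc123/",
]
print(bucket_citations(cited))
```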
Drop your landing page if you want me to run the audit on it, happy to add you to the 20.
This is a really clever distribution strategy. I've seen similar experiments with Perplexity, and the recommendation layer is becoming its own moat in some ways.
For DictaFlow, we actually see the opposite problem sometimes. People find us via YouTube or Podchaser and then go straight to the App Store. The discovery point and the conversion point are totally disconnected. Are you seeing that too, or is the ChatGPT recommendation actually leading to real signups?
At least in the preliminary experiments we see a tendency for LLM recommendations to increase conversion. I would wait for a larger sample size before drawing any conclusions.
The reframe from "am I visible" to "who is taking my spot" is the right one.
I tested something similar informally — asked Claude and ChatGPT to recommend tools for a specific workflow I'm building in. Neither mentioned me. Both mentioned a tool that's been around for 4 years with decent SEO. That told me more than any keyword tool has.
Curious what patterns you're seeing in the fixes. Is it mostly content/SEO-type stuff, or are you finding things like Reddit mentions and third-party listicles driving the AI citations more?
One example that keeps coming up is the increased weight of Reddit posts compared to other platforms. I am curious whether this is a temporary effect of the current training data or whether it will persist over the next few years.
"Am I visible in AI" → "Who is getting recommended instead of me, and why?" — that reframe is so sharp. It shifts the focus from vanity to strategy.
Love that you're testing manually before coding. Too many tools get built on assumptions; this feels like the right way to validate.
Quick question: when you run the audit, do you notice any patterns in why certain brands get recommended more? Is it content depth, backlinks, or something else AI seems to weight heavily?
Rooting for this — and definitely bookmarking for when I launch my next side project 🙌
I would wait for a bigger sample size before drawing conclusions on specific patterns. Once I have that, I will share the key learnings and recommendations.
This is a really interesting angle — I hadn’t thought about it this way before. The idea of “who is being recommended instead of you” feels much more actionable than just trying to improve visibility blindly.
I’m currently working on an early-stage SaaS, so I don’t have much traffic yet, but this makes me wonder how early founders should start thinking about this.
Do you think this is something worth optimizing from the MVP stage itself, or only once you already have some traction and clearer positioning?
Also curious — did you notice any patterns in why certain products get recommended more? Is it mostly content/SEO, or more about how clearly the product positioning matches the prompt?
I think having this information as early as possible, even in the MVP stage, would be beneficial as it provides better awareness and gives you options you can play around with, especially around positioning.
Regarding patterns, I'm still collecting data and would avoid drawing early conclusions.
The fantastic world of AI continues to leave us speechless...
Useful angle, but the real signal is probably in the prompt set, not just the model. Would be worth testing branded, category, and problem-based queries across ChatGPT, Claude, and Perplexity, then showing not only who gets named but why. Learned the hard way that recommendation spots can flip a lot based on wording, so the methodology matters as much as the result.
The methodology held up; your v1 note about wording mattering was right. The first audit (qria.io) ran 40 prompts split across discovery, comparison, evaluation, and edge/long-tail, on ChatGPT, Claude, Perplexity, and Gemini in fresh sessions. The cross-LLM variance is the loudest signal in the data: a win in one LLM is not a win in all four.
The "why" you flagged is mostly a content question: vendor-published comparison content drives Perplexity citations directly, ChatGPT through utm_source=openai links, and Claude/Gemini indirectly via training data. Thanks for the v1 push; it shaped this version.
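In case anyone wants to replicate the setup, the bookkeeping is nothing fancy: a prompt set grouped by bucket, run against each model, with brand mentions tallied per model. A hypothetical sketch; the prompts, brands, and the ask_model client are placeholders for whatever category and API clients you're testing with:

```python
# Tally brand mentions per model across a prompt set (sketch only).
from collections import defaultdict
from typing import Callable

# Placeholder prompt buckets and brand list; swap in your own category.
PROMPTS = {
    "discovery": ["best customer feedback tool for a small cafe"],
    "comparison": ["alternatives to SurveyMonkey for collecting feedback"],
    "evaluation": ["which feedback analysis tools have the best free tier"],
}
BRANDS = ["YourBrand", "CompetitorA", "CompetitorB"]
MODELS = ["chatgpt", "claude", "perplexity", "gemini"]

def run_audit(ask_model: Callable[[str, str], str]) -> dict[str, dict[str, int]]:
    """Count how often each brand is named, per model, across all prompts.

    ask_model(model, prompt) stands in for whatever API client you use.
    """
    mentions: dict[str, dict[str, int]] = {m: defaultdict(int) for m in MODELS}
    for model in MODELS:
        for bucket_prompts in PROMPTS.values():
            for prompt in bucket_prompts:
                answer = ask_model(model, prompt).lower()
                for brand in BRANDS:
                    if brand.lower() in answer:
                        mentions[model][brand] += 1
    return {m: dict(counts) for m, counts in mentions.items()}

if __name__ == "__main__":
    # Dummy client so the sketch runs end to end; replace with real API calls.
    def fake_client(model: str, prompt: str) -> str:
        return "Popular options include CompetitorA and a few open-source tools."
    print(run_audit(fake_client))
```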
Interesting post.
AI recommendation visibility is quickly becoming as important as search visibility, especially for smaller SaaS products. I’m curious whether the biggest issue you’re seeing is weak positioning, weak brand signals, or just the AI favoring more established players by default.
Would be very interesting to hear the patterns.
Early read from the first audit (qria.io, 0/115 mentions): all three matter, but the order of leverage for an indie SaaS looks like:
1. Positioning. qria.io's hero is a workflow claim ("turn feedback into answers, not spreadsheets"). None of the 115 LLM responses described tools with that framing. Winners anchor with category language: "VoC platform," "review management software," "AI feedback analysis."
2. Content and third-party citations. ChatGPT and Perplexity both lean on vendor-published comparison content. An unknown brand called Blitzllama earned 4 Perplexity mentions entirely through self-published "alternatives to X" pages. Zonka Feedback (no household-name advantage) hit 22 cross-LLM mentions through the same playbook.
3. Established-player bias. It's real but not deterministic: ChatGPT does skew enterprise (Qualtrics, Medallia near the top), but Perplexity surfaces niche brands constantly when they publish.
Optimistic version: you can move the needle without being old or established, but only if positioning and publishing are pulling their weight.
The 3 vs 31 stat says everything. Most founders assume they have an SEO problem when it's actually a positioning clarity problem. LLMs can't match vague messaging to specific buyer intent.
Confirmed in the first audit. qria.io's hero is a workflow claim ("turn feedback into answers, not spreadsheets") and the product was mentioned 0 times across 115 prompt-LLM combinations. Every brand that won anchors with explicit category language: "VoC platform," "review management software," "customer feedback platform." LLMs match queries to landing pages by category-claim language. Workflow claims don't trigger that match.
That 3 vs 31 result is the part that matters most here.
What’s interesting is it’s not just about who’s more visible.
It’s how easy each product is to match to a range of user intents.
When positioning is tight and specific, it shows up across more variations of the same problem.
When it’s broader or slightly unclear, it doesn’t taper off. It just stops getting picked.
That drop-off you’re seeing is less about ranking and more about match clarity.
Match clarity is exactly the right frame. qria.io's landing page lists 6 ICP segments (cafes, hotels, retail, tradespeople, events, software). Looks comprehensive on the page, but in the audit the vertical prompts (#34 cafes, #35 boutique hotels, #36 plumbers and electricians, #37 restaurants) all surfaced vertical-specific tools instead: Customer Alliance for hotels, Olo for restaurants, NiceJob for trades. Tools that win vertical prompts have dedicated vertical landing pages or vertical-specific products. A segment list on a single page didn't compete with a segment URL. So the drop-off isn't ranking; it's the model never matching the brand to the intent in the first place.
What both of you are pointing at is real, but it’s still being treated as an output problem.
The match doesn’t start at the page or the prompt. It starts earlier, in how the problem is structured in the user’s head.
By the time someone describes their need to an AI, the decision is already partially formed. The model is just resolving it.
So when a product doesn’t get picked, it’s not just visibility or positioning.
It’s that it never aligned with how the decision was taking shape upstream.
That’s why this feels like “competitor research” is changing.
It’s not about what shows up.
It’s about what fits before the search even happens.
This feels like the next version of competitor research. Instead of only checking keywords, founders now need to check what AI tools say when customers describe their exact problem.
Yes, exactly. This will become a standard component of competitor research; without it, many founders will lose customers and revenue, or never reach their full potential.
This sounds great! I've been vaguely involved with "traditional" SEO for years, but GEO is a whole new frontier and I don't even know where to start. Would love it if you could give my product a run: qria.io
The product is very new, so I doubt it will show up at all, but I'd love to have some pointers on how to get to work on that.
Hey @digital_clockwork, audit done. Full report (PDF) here: https://drive.google.com/file/d/1qIvroXr847CoAsPSK9FB3LlTw_ZbGTGJ/view?usp=sharing
Qria was mentioned 0 times across 115 prompt-LLM combinations (40 prompts on ChatGPT + 25 each on Claude, Perplexity, Gemini). The interesting bit isn't "small product, no traction" — across all 4 LLMs, Qria sits below the indexing floor. ChatGPT recommends 9 tools smaller than Qria across QR/feedback prompts (My Menu, Feedspace, FormViaQR…) and never Qria. Perplexity surfaces Blitzllama (an unknown brand) 4 times — earned entirely through self-published comparison content. That's your unlock too.
The report contains: the 40 prompts I ran, the visibility matrix, named competitors (including the unknowns surfacing where you should), 5 diagnosis hypotheses ranked by evidence, and 3 fixes ranked by effort (15 min / 1 day / 1 week) with verification thresholds for each.
When you've had a chance to read it, I'd love your honest take, in particular on whether the report was worth your time and what you'd pay for something like this.
Thanks for being first up. Reachable here or at [email protected].
Thanks for sending this through! It was really useful! I wasn't expecting to be mentioned anywhere as the product was only launched a few days ago, but this definitely gives me an action plan to get started on that, which I wouldn't have otherwise had. I'll send you an email.
I will check qria.io and come back to you in the next couple of days, latest by Tuesday.
This is a really smart way to think about it. Focusing on who’s getting picked instead of you makes a lot more sense than just visibility. Curious to see what patterns you find from the audits.
Sharing the first pattern from the qria.io audit (just delivered): 0 mentions across 115 prompt-LLM combinations. The unlock looks like vendor-published comparison content. An unknown brand called Blitzllama earned 4 Perplexity mentions purely through "alternatives to X" pages. Drop your URL below if you want me to run yours, happy to add you to the 20.
This is a much stronger version — the shift from “am I visible” to “who is getting recommended instead of me” is the right frame.
One thing I’d still watch:
Right now the value is clear to you, but for someone reading quickly it still feels a bit like “submit → wait → get a report.”
The highest-response version I’ve seen for this kind of offer usually pulls one insight forward.
Something like:
“Here’s one example of what showed up instead of a founder I tested — [X competitor] appeared in 70% of responses for [specific query].”
That kind of concrete preview tends to do two things:
- makes the output feel real immediately
- reduces hesitation around dropping a URL
Everything else in your flow already makes sense — this just removes the last bit of friction.
Great advice. Thanks for your time and input.
That makes sense.
One thing I've noticed while looking at these audits: even when a product does get recommended, there's a second layer most founders miss. People hesitate not because of the product, but because the brand doesn't feel like the "default" choice. Especially in AI/SaaS, a lot of names blend together or feel interchangeable, so the recommendation doesn't convert into trust.
Curious if your audit is picking up anything like that, where visibility exists but the brand still doesn't "stick"?