
Who is ChatGPT recommending to your customers instead of you? I will run this for the first 20 indie SaaS founders for free.

Last month I asked ChatGPT, Claude, Perplexity, and Gemini the exact questions people type when they are looking for a product in my category.

My product got mentioned 3 times across 47 prompts. One competitor got mentioned 31 times. Another one I had never even heard of got mentioned 19 times. That was a weird moment.

It made me realise "Am I visible in AI?" is the wrong question. The real one is: who is getting recommended to my customers instead of me, and why?

So I am building a small tool that answers exactly that for indie SaaS founders. For a given landing page it runs the 30 to 50 buyer questions a real customer would ask an AI, compares the answers across ChatGPT, Claude, Perplexity, and Gemini, and shows you side by side which names keep showing up where yours should be. Then it gives you 2 or 3 fixes ranked by effort, the ones most likely to move the needle.
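The comparison step can be sketched as a simple mention counter over the collected answers. This is only an illustrative sketch, not the actual tool: the function name, the brand names, and the toy answers are all made up, and it assumes you have already pulled each model's responses as plain text.

```python
import re
from collections import Counter

def count_mentions(answers: dict, brands: list) -> dict:
    """Count how many answers mention each brand, per model.

    answers: model name -> list of answer texts for the test prompts
    brands:  brand names to look for (whole-word, case-insensitive;
             each brand is counted at most once per answer)
    """
    results = {}
    for model, texts in answers.items():
        counts = Counter()
        for text in texts:
            for brand in brands:
                # Whole-word match so "Notion" doesn't hit "notions".
                if re.search(rf"\b{re.escape(brand)}\b", text, re.IGNORECASE):
                    counts[brand] += 1
        results[model] = counts
    return results

# Toy example with made-up answers:
answers = {
    "chatgpt": ["I'd recommend CompetitorA or CompetitorB.", "CompetitorA is popular."],
    "claude": ["CompetitorB fits this use case."],
}
print(count_mentions(answers, ["MyProduct", "CompetitorA", "CompetitorB"]))
```

The side-by-side view is then just this table per model, with your own brand's (often zero) count next to the competitors'.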

Before I write any code, I want to run 20 of these audits manually to see if the output is actually useful to anyone.

So here is the offer: the first 20 indie SaaS founders who drop their landing page URL in the comments get the audit for free. I will run it this week and send you back:

  • the exact prompts I tested
  • which brands showed up where you did not
  • 2 or 3 fixes ranked by effort (15 min, 1 day, 1 week kind of thing)

In exchange I would love your honest take on whether the report was actually worth your time, and what you would pay for something like this.

Drop your URL below and I will reply with an ETA.


Quick note: this is v2. I posted a first version of the idea here two days ago and got really useful feedback from @aryan_sinh and @ShellSageAI. They called out that the framing was off and the methodology too vague. They were right on both, so I rewrote it.

on April 24, 2026
  1. 3

    Ran into this from the Shopify side. Merchants we audit ask why their store shows up in ChatGPT for some searches and completely disappears for others.

    Pattern from 50+ store scans: product pages with proper schema markup get cited far more consistently. It's basically a structured-data problem in disguise. Most merchants have zero schema on their PDPs and collection pages, so LLMs have nothing reliable to pull from.

    The thing that surprised us: conflicting review apps made it worse. Stores with 2-3 review apps exporting different structured data about the same product got penalized because the model couldn't reconcile the sources.
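    For anyone unfamiliar with the schema being described here: the usual fix is one consistent `schema.org` Product JSON-LD block per product page. A minimal sketch with placeholder values (not a real product) looks like this:

    ```json
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Example Widget",
      "description": "Short, factual product description.",
      "sku": "EX-123",
      "offers": {
        "@type": "Offer",
        "price": "29.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock"
      },
      "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "128"
      }
    }
    ```

    This goes inside a `<script type="application/ld+json">` tag on the PDP; keeping a single source of truth for ratings avoids the conflicting-review-apps problem described above.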

    Dropping our profile for the free audit.

    1. 1

      Could you please send me the landing page URL for your profile? I would like to avoid auditing the wrong thing.

  2. 1

    This is actually a really interesting angle.

    As an indie developer, I’ve started to realize that visibility is slowly shifting from search engines to AI recommendations, and that changes the rules quite a bit. You can build a solid product, ship updates regularly, and still remain invisible if the model has never “seen” your app mentioned in enough places.

    It makes me wonder whether the real challenge now is not just building something good, but making sure it exists in the broader conversation — directories, discussions, reviews, and communities.

    Curious to see what patterns you find from these audits.

  3. 1

    This is a really clever distribution strategy. I've seen similar experiments with Perplexity, and the recommendation layer is becoming its own moat in some ways.

    For DictaFlow, we actually see the opposite problem sometimes. People find us via YouTube or Podchaser and then go straight to the App Store. The discovery point and the conversion point are totally disconnected. Are you seeing that too, or is the ChatGPT recommendation actually leading to real signups?

    1. 1

      At least from the preliminary experiments, we see a tendency for LLM recommendations to increase conversion. I would wait for a larger sample size before drawing conclusions too early.

  4. 1

    The reframe from "am I visible" to "who is taking my spot" is the right one.
    I tested something similar informally — asked Claude and ChatGPT to recommend tools for a specific workflow I'm building in. Neither mentioned me. Both mentioned a tool that's been around for 4 years with decent SEO. That told me more than any keyword tool has.
    Curious what patterns you're seeing in the fixes. Is it mostly content/SEO-type stuff, or are you finding things like Reddit mentions and third-party listicles driving the AI citations more?

    1. 1

      One pattern that comes up more and more is the increased weight of Reddit posts compared to other platforms. I am curious whether this is a temporary effect of the current training data, or whether it will persist in the coming years.

  5. 1

    "Am I visible in AI" → "Who is getting recommended instead of me, and why?" — that reframe is so sharp. It shifts the focus from vanity to strategy.
    Love that you're testing manually before coding. Too many tools get built on assumptions; this feels like the right way to validate.
    Quick question: when you run the audit, do you notice any patterns in why certain brands get recommended more? Is it content depth, backlinks, or something else AI seems to weight heavily?
    Rooting for this — and definitely bookmarking for when I launch my next side project 🙌

    1. 1

      I would wait for a bigger sample size before drawing conclusions on specific patterns. Once I have that, I will share the key learnings and recommendations.

  6. 1

    This is a really interesting angle — I hadn’t thought about it this way before. The idea of “who is being recommended instead of you” feels much more actionable than just trying to improve visibility blindly.

    I’m currently working on an early-stage SaaS, so I don’t have much traffic yet, but this makes me wonder how early founders should start thinking about this.

    Do you think this is something worth optimizing from the MVP stage itself, or only once you already have some traction and clearer positioning?

    Also curious — did you notice any patterns in why certain products get recommended more? Is it mostly content/SEO, or more about how clearly the product positioning matches the prompt?

    1. 1

      I think having this information as early as possible, even in the MVP stage, would be beneficial as it provides better awareness and gives you options you can play around with, especially around positioning.

      Regarding patterns, still collecting data and would avoid drawing early conclusions.

  7. 1

    The fantastic world of AI continues to leave us speechless...

  8. 1

    Useful angle, but the real signal is probably in the prompt set, not just the model. Would be worth testing branded, category, and problem-based queries across ChatGPT, Claude, and Perplexity, then showing not only who gets named but why. Learned the hard way that recommendation spots can flip a lot based on wording, so the methodology matters as much as the result.
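The branded / category / problem-based split this comment describes can be sketched as a small prompt-template generator. Everything here is hypothetical: the function name, the template wording, and the placeholder inputs are illustrative, not a tested prompt set.

```python
# Hypothetical prompt templates for the three query types named above:
# branded, category, and problem-based. Wording is illustrative only.
def build_prompt_set(brand, category, problem):
    return {
        "branded": [
            f"What is {brand} and who is it for?",
            f"Is {brand} worth paying for?",
        ],
        "category": [
            f"What are the best {category} tools right now?",
            f"Recommend a {category} tool for a solo founder.",
        ],
        "problem": [
            f"I need to {problem}. What should I use?",
            f"What's the easiest way to {problem}?",
        ],
    }

prompts = build_prompt_set(
    brand="MyProduct",                  # placeholder brand
    category="invoicing",               # placeholder category
    problem="send recurring invoices",  # placeholder problem statement
)
```

Running each bucket separately makes the wording sensitivity visible: if a brand appears for branded queries but vanishes for problem-based ones, that points at positioning rather than raw visibility.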

  9. 1

    Interesting post.

    AI recommendation visibility is quickly becoming as important as search visibility, especially for smaller SaaS products. I’m curious whether the biggest issue you’re seeing is weak positioning, weak brand signals, or just the AI favoring more established players by default.

    Would be very interesting to hear the patterns.

  10. 1

    The 3 vs 31 stat says everything. Most founders assume they have an SEO problem when it's actually a positioning clarity problem. LLMs can't match vague messaging to specific buyer intent.

  11. 1

    That 3 vs 31 result is the part that matters most here.

    What’s interesting is it’s not just about who’s more visible.

    It’s how easy each product is to match to a range of user intents.

    When positioning is tight and specific, it shows up across more variations of the same problem.

    When it’s broader or slightly unclear, it doesn’t taper off. It just stops getting picked.

    That drop-off you’re seeing is less about ranking and more about match clarity.

  12. 1

    this feels like the next version of competitor research. Instead of only checking keywords, founders now need to check what AI tools say when customers describe their exact problem.

    1. 1

      Yes, exactly. This will become a standard component of competitor research; without it, many founders would lose customers and revenue, or not reach their full potential.

  13. 1

    This sounds great! I've been vaguely involved with "traditional" SEO for years, but GEO is a whole new frontier and I don't even know where to start. Would love it if you could give my product a run: qria.io

    The product is very new, so I doubt it will show up at all, but I'd love to have some pointers on how to get to work on that.

    1. 1

      I will check qria.io and come back to you in the next couple of days, latest by Tuesday.

  14. 1

    This is a really smart way to think about it. Focusing on who’s getting picked instead of you makes a lot more sense than just visibility. Curious to see what patterns you find from the audits.

  15. 1

    This is a much stronger version — the shift from “am I visible” to “who is getting recommended instead of me” is the right frame.

    One thing I’d still watch:

    Right now the value is clear to you, but for someone reading quickly it still feels a bit like “submit → wait → get a report.”

    The highest-response version I’ve seen for this kind of offer usually pulls one insight forward.

    Something like:
    “Here’s one example of what showed up instead of a founder I tested — [X competitor] appeared in 70% of responses for [specific query].”

    That kind of concrete preview tends to do two things:

      • makes the output feel real immediately
      • reduces hesitation around dropping a URL

    Everything else in your flow already makes sense — this just removes the last bit of friction.

    1. 1

      Great advice. Thanks for your time and input.

      1. 1

        That makes sense.

        One thing I’ve noticed while looking at these audits: even when a product does get recommended, there’s a second layer most founders miss — people hesitate not because of the product, but because the brand doesn’t feel like the “default” choice.

        Especially in AI/SaaS, a lot of names blend together or feel interchangeable, so the recommendation doesn’t convert into trust.

        Curious if your audit is picking up anything like that — where visibility exists, but the brand still doesn’t “stick”?
