
I built an AEO audit last month. Then I realized AEO audits don't actually fix the problem.

A few weeks ago I shipped saasoffers.tech/aeo-audit, a free tool that scores how "quotable" your site is for AI search. Schema markup, semantic HTML, FAQ density, the works.

People liked it. I liked it. My own site scored well.

ChatGPT still didn't cite me.

I started running my brand through the audit every week. Score went up. Citations didn't. Same story for the customers using it. The audit was telling everyone they were quotable, and AI was telling everyone there was nothing to quote them from.

So I dug into where ChatGPT, Claude, and Gemini actually pull from when they answer "what's the best X for Y?" — and the answer is depressingly consistent: Reddit. Specifically, threads where real humans are arguing about products, sharing receipts, recommending alternatives.

If you're not in those threads, no amount of schema markup gets you cited. The optimization layer can be perfect. If the source data doesn't mention you, AI can't either.

So I built AEOrank.

Drop in your URL → ~20 seconds later you see:

  • The subreddits your category is actually living in

  • Real threads from the last 30 days where you're missing

  • Sample formats showing how AI could surface you once those gaps are closed

Free to run a report. The paid plans ($1k trial, $2k/mo growth) are where my agency actually publishes the Reddit content.

Try it on your own brand: aeorank.tech

Genuinely curious — does the report tell you something useful about your category, or does it feel obvious? What's missing that would make you actually pay the engagement service?

posted to SaasOffers
  1. 1

    Super useful tool!

  2. 2

    Hey, I like it. Consider not requiring an email to run the scan, and collecting it afterward instead. Might get more users if you do.

    1. 1

      Definitely agree.

    2. 1

      Fair point. The email gate is there because people were closing the tab during the 20-second scan and I wanted to deliver results either way. But you're probably right that it costs top-of-funnel volume. Going to test scan-first, email-to-unlock-full-report and see what happens. Thanks for the nudge.

      1. 1

        You might consider a different loading strategy if that's the case. For example, show them content alongside the loading indicator, something to keep them engaged and let them know when it will finish.

  3. 2

    This is a brutally honest pivot post. You built the thing, ran the experiment, watched the data not match your theory, and actually asked why. Respect.

    The "score went up, citations didn't" insight is key. It means the optimization layer and the citation layer are decoupled. Schema markup can be perfect. If you're not in the threads where humans argue, AI can't cite you.

    We see the same pattern with our API gateway (ChinaLLM). Developers search "OpenAI alternative" on Reddit, find comparison threads, then ask ChatGPT to summarize. If we're not in those threads, we don't show up even though our product fits the query.

    One question on the $2k/mo service: how do you handle authenticity? SaaS subreddits are good at sniffing out planted content. A downvoted thread probably hurts more than silence. Is the work genuine participation or more like seeding mentions?

    1. 1

      Honest answer: it's genuine participation, not seeding. If we tried to seed at scale we'd get caught within a month and the client's brand would take the hit, not ours. The economics don't work.

      The process is roughly: the audit surfaces threads where someone is asking a real question the client can credibly answer, we draft a reply with the client's actual context and voice, and we never lead with the product. We lead with the answer and mention the product only when it's the obvious fit. About a third of the threads the audit surfaces we don't reply to at all because the angle would feel forced.

      You're right that a downvoted thread hurts worse than silence. That's why the paid side is mostly thread selection and content strategy, not posting volume. 4 to 8 replies per month per client, not 40. Quality of fit beats quantity every time.

  4. 1

    This hits a truth most people don’t want to admit — the technical optimization layer (schema, markup, HTML structure) solves visibility, but not mentions.

    The distinction you made between “being quotable” and “being present in the source data” is the part that actually matters for AI search.

    I ran the report and the subreddit-mapping alone is surprisingly useful. Seeing where the conversations actually live vs where I assumed they were is eye-opening.

    Curious: do you think long-term AI ranking will behave more like SEO (optimize → get indexed) or more like community presence (earn mentions → get citations)?

  5. 1

    This is such an important point. Everyone is optimizing for “being picked” by AI, but not enough focus is on actually being mentioned in the first place. Distribution is the real layer here. We’ve been seeing similar patterns while helping founders structure product visibility systems at FoundersBar.

  6. 1

    Congrats on your launch! Your product looks interesting. If you ever need a SaaS explainer or promo video to showcase your product, I’d love to help. Please contact me: +923136201106

  7. 1

    Ran tryreleaselog.com through it. The keyword opportunity data was genuinely useful: 37k monthly searches across changelog and release management terms I hadn’t fully mapped. The subreddit suggestions were way off, though. r/NoMansSkyTheGame and r/skyrimmods kept coming up because ‘release log’ is gaming terminology for patch notes, not just a SaaS product name. The tool struggled to separate brand name from generic term, which is probably a common edge case for products with descriptive names. The r/ExperiencedDevs result was the one legitimate hit.

    The core insight still lands though. I have zero Reddit presence in r/SaaS and r/indiehackers, where my actual category lives, and that’s exactly why AI won’t cite me yet. The audit didn’t surface the right communities, but it confirmed the problem is real.

    1. 1

      Glad the keyword data landed. The "your category lives in r/SaaS and r/indiehackers but you're absent there" insight is the whole point, and you got there even with noisy subreddit output. Will ping when the fix is live.

      1. 1

        Appreciate it, and yes, the noisy subreddit output actually made the core insight clearer, not murkier. When the tool surfaces r/NoMansSkyTheGame for a SaaS product, it's obvious something is off, which means you have to actually think about where your category lives rather than just trusting the output. That's probably more useful than a clean list that gives false confidence. Will be watching for the fix; the brand name versus generic term disambiguation is a real edge case that probably affects more products than people realize.

  9. 1

    Nice information, like it.

  10. 1

    Strong insight. We’re seeing the same in our scans: schema can improve structure, but citations move only when brands start appearing in real discussion ecosystems. Have you noticed which subreddit signal (mentions vs upvotes vs recency) correlates most with later AI citations?

    1. 1

      Recency matters more than I expected. A 30-day-old thread with 5 mentions outperforms a 2-year-old thread with 50; retrieval leans fresh.

      Engagement depth beats upvotes. 8 upvotes with 40 substantive replies gets cited more than 200 upvotes with 4 replies. Models seem to weight "real conversation" over "popular."

      Mentions inside replies beat mentions in the original post. Recommendations read as signal, self-mentions read as pitch.
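
      If it helps to see the relative weighting concretely, here's a toy sketch of a thread-scoring heuristic. Everything in it is illustrative: the field names and coefficients are invented for this comment, not pulled from AEOrank's actual scorer.

      ```python
      from datetime import datetime, timedelta, timezone

      # Illustrative only: invented field names and weights, not a measured model.
      RECENCY_HALF_LIFE_DAYS = 30

      def citation_signal(thread: dict) -> float:
          """Rough heuristic combining the three observations above:
          recency, conversation depth, and where the brand mentions sit."""
          age_days = (datetime.now(timezone.utc) - thread["posted_at"]).days
          recency = 0.5 ** (age_days / RECENCY_HALF_LIFE_DAYS)  # fresh threads dominate

          # Substantive replies count far more than raw upvotes.
          depth = thread["substantive_replies"] * 3 + thread["upvotes"] * 0.1

          # Mentions inside replies read as recommendations; mentions in the
          # original post read as a pitch.
          mentions = thread["mentions_in_replies"] * 2 + thread["mentions_in_post"] * 0.5

          return recency * depth * (1 + mentions)

      example = {
          "posted_at": datetime.now(timezone.utc) - timedelta(days=12),
          "upvotes": 8,
          "substantive_replies": 40,
          "mentions_in_replies": 3,
          "mentions_in_post": 0,
      }
      print(round(citation_signal(example), 1))
      ```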

  11. 1

    Honestly this is the most honest pivot post I've seen in a while. You built the thing, watched it not work, and actually asked why instead of just adding more features to it. Respect.

    The Reddit insight tracks. I've noticed the same thing - when I ask ChatGPT about tools in my space, it's basically just summarizing a 2-year-old r/entrepreneur thread.

    My only hesitation with the paid side: how do you avoid the content feeling planted? Subreddits in the SaaS/tools space are pretty good at sniffing that out, and a downvoted thread probably hurts more than silence. Is the $2k/mo work more like genuine participation or more like seeding?

    Would run the report either way. Already curious what subreddits it spits out for my category.

    1. 1

      Genuine participation, not seeding. The economics don't work otherwise, planted content gets sniffed out fast and the client's brand takes the reputation hit, not ours.

      The short version: audit finds threads where the client can credibly answer a real question, we draft with their actual context, they post from their own account, we never lead with the product. About a third of the threads the audit surfaces we skip because the angle would feel forced. Volume is 4 to 8 replies a month per client, not 40. Quality of fit beats quantity.

  12. 1

    Love that you didn’t just stop at the observation, but built AEOrank to solve the actual root problem: mapping exactly which subreddits and recent threads your niche lives in, and highlighting the gaps where your brand is completely missing. Running a free report feels immediately actionable, not just theoretical vanity metrics.

    The shift from generic AEO audits to Reddit signal tracking is exactly the next wave of AI search growth. Most builders are still only focused on traditional SEO, so this fills a huge underserved gap. Curious to dig deeper into the thread gap analysis, and I think the paid agency service makes total sense—most founders don’t have the time to consistently engage niche Reddit communities the right way without spamming. Great build, great problem solved!

    1. 1

      Appreciate this. The "fewer audits, more thread visibility" framing is exactly how I'd pitch it now if I were rewriting the post. Most builders treat AI search as an SEO problem when it's really a presence problem, different muscle, different tools.

      If you run the report on your own brand and the thread gap analysis surfaces something useful (or something obviously wrong), would love to hear about it.

  13. 1

    Awesome tool. Thanks, mate.

  14. 1

    An AEO audit can highlight issues, but it doesn’t fix them on its own—it’s just the first step. The real improvement comes from implementing changes like optimizing content, improving structure, and aligning with user intent.

    Just like audits need action to deliver results, staying informed about opportunities can help you take practical steps toward growth and better outcomes.

    1. 1

      Agreed, the audit is diagnosis not treatment. The whole point of the post is that I had to build the second tool to do the actual fixing.

  15. 1

    Living this right now. I launched a Chrome extension 3 days ago and the first thing I did was start commenting in relevant Reddit threads - not pitching, just answering questions where my experience as a builder was useful.

    The "score went up, citations didn't" pattern is real. I have vs-pages, schema markup, blog posts - all the on-page stuff. But the only thing that actually drives profile clicks is genuine Reddit activity.

    Interesting to see someone productizing the gap between "optimized" and "mentioned." Ran a report on my site - curious to see if the subreddit recommendations match where I've been manually finding traction.

    1. 1

      This is exactly the use case I built it for. Manual Reddit work is the right move early, you learn the communities and build account history. The audit is most useful as a sanity check plus surfacing 2 or 3 subreddits you missed.

      Real test when you run it: how many surfaced subreddits were already on your list, and how many are new but actually relevant? If it's mostly overlap, the tool just confirms what you know. If it surfaces new ones that pass your sniff test, that's the value.

  16. 1

    Interesting shift from “optimize your site” to “optimize your presence where opinions are formed.” Feels like off-page SEO just evolved into community-driven signals.

    1. 1

      That's the cleanest framing I've heard. Off-page SEO just shifted surfaces, backlinks from authority sites became mentions in places where opinions are formed. Same logic, different terrain. The tools haven't caught up to the shift yet, which is the whole opening.

  17. 1

    The "score went up, citations didn't" frame is one of the cleanest reframings I've read this month. I've been running my own indie launch experiments (a small iOS memo app for the Captio-refugee niche) under the same theory: spending an hour replying in two specific subreddits has moved my brand mentions inside ChatGPT measurably more than three weeks of on-page schema cleanup.

    The deeper lesson I keep relearning is that optimizing the visible metric only matters if the metric and the actual citation source agree — and right now AEO docs and AI ground truth disagree pretty hard. Would love to swap notes with anyone running the same Reddit-first playbook.

    Question — in your data, is there a recommendation-density threshold per thread above which citation pickup jumps non-linearly?

    1. 1

      Bigger variable than density is thread metadata. A "what should I use for X" question gets cited more than a "show off your stack" thread at identical density. Models treat the former as decision-context and the latter as community chatter.
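
      A toy way to picture that distinction (the marker lists below are made up for illustration; no model classifies threads this literally):

      ```python
      # Illustrative only: invented keyword markers, not how retrieval actually works.
      DECISION_MARKERS = ("what should i use", "best tool for", "alternative to", "recommend")
      SHOWCASE_MARKERS = ("show off", "share your stack", "what are you building")

      def thread_intent(title: str) -> str:
          """Tag a thread title as decision-context or community chatter."""
          t = title.lower()
          if any(m in t for m in DECISION_MARKERS):
              return "decision-context"   # cited more at identical mention density
          if any(m in t for m in SHOWCASE_MARKERS):
              return "community-chatter"
          return "unknown"

      print(thread_intent("What should I use for uptime monitoring?"))  # decision-context
      print(thread_intent("Show off your self-hosted stack"))           # community-chatter
      ```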

      Your hour-in-two-subreddits beating three weeks of schema cleanup is the cleanest single data point of this whole thesis. If you're game, would love to compare your manual playbook against what AEOrank surfaces for your category. Captio-refugee is niche enough that signal should be easy to spot.

  18. 1

    I gave it a try and the SEO part was great; it looks rather useful. The recommended subreddits, on the other hand, were accidentally hilarious. I thought it was great! I might even use some of them in my advertising in some twisted way in the future. FYI, my web application is for website monitoring/e2e testing and the recommended subreddits were:

    r/Monitors - which 4k monitor to buy?
    r/Parents - monitor your child
    r/FelineDiabetes - monitor glucose
    r/PPC - monitor Google ad campaigns
    r/ITCareerQuestions - employee monitoring
    r/ClaudeAI - token monitoring
    r/sharepoint - monitor shared folders
    r/microsaas - people who build things visit it (maybe a match)

    I understand it locked onto "monitoring" only, which makes me wonder if I should be more specific on my website itself too. It's such a wide category otherwise. So, overall, I think it's fairly useful, but I'm not sure it helps with the main use case proposed in my situation without being able to modify the main keyword/concept. AIs, when asked, tend to answer correctly what my website is about, so maybe the tool is just generalizing a little too far.

    1. 1

      Shipping a fix this week. Two things changing: the extractor will preserve multi-word phrases as a unit when they form a recognized category, and you'll be able to override the extracted concept before the search runs. So you'd type "uptime monitoring and end-to-end testing for web apps" and confirm the concept as "website monitoring" before it pulls subreddits. Going to use your case as the test case if that's ok, will DM when it's live.

      Your instinct on being more specific on the site itself is also right, but for a different reason. The models indexing your site face the same disambiguation problem the audit does. If your homepage leads with "monitoring" without "website" or "uptime" qualifying it in the first 100 words, you're competing for citation against r/Monitors regardless of what the audit recommends.
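
      Roughly what the fix could look like, as a sketch. The names and the category list here are hypothetical, not the real implementation:

      ```python
      # Hypothetical sketch of the planned fix: keep multi-word category phrases
      # intact and let the user override the extracted concept before the search.
      KNOWN_CATEGORIES = {
          "website monitoring",
          "uptime monitoring",
          "end-to-end testing",
          "release management",
      }

      def extract_concept(site_text: str, override: str | None = None) -> str:
          """Pick the category concept used for the subreddit search."""
          if override:                      # user confirmed or corrected the concept
              return override.strip().lower()

          text = site_text.lower()
          matches = [c for c in KNOWN_CATEGORIES if c in text]
          if matches:
              # Prefer the longest known phrase so "website monitoring"
              # never collapses to the bare word "monitoring".
              return max(matches, key=len)

          # Naive fallback: most frequent single keyword (illustration only).
          words = [w.strip(".,!?") for w in text.split()]
          return max(set(words), key=words.count) if words else ""

      concept = extract_concept(
          "Uptime monitoring and end-to-end testing for web apps.",
          override="website monitoring",    # confirmed before the subreddit search runs
      )
      print(concept)  # -> "website monitoring"
      ```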

  19. 1

    Sounds great!
    Would be awesome to see some real results from real users!

    1. 1

      Working on a public case study with one of the agency clients now, will post it here when it's live.

  20. 1

    The wonderful world of AI continues to surprise everyone... Getting better every day, with many more surprises, what else is coming next??? Good luck on the journey ahead...

    1. 1

      Thanks, appreciate it.

  21. 1

    Wow, realizing that AI models pull straight from messy Reddit threads instead of perfectly polished SEO markup is such a massive eye-opener! It totally makes sense though, since raw human debates hold way more genuine market signal than basic schema tags. Pivoting to actually surface those hidden community gaps is a brilliant, practical way to validate real demand fast. The premium pricing makes total sense too, since authentically engaging in those threads is the actual bottleneck. Skipping the over-engineered SEO and going straight to the source always wins!

    1. 1

      Thanks Lily. The follow-on question I keep circling is whether AEO audits have any value at all, or whether they're just noise. My current take is they're useful as a hygiene check (fix obvious schema breakage, make sure your pages aren't blocked from crawlers), but the marginal score from 70 to 90 doesn't move citations. The work that does move citations is somewhere else entirely.

      1. 1

        I totally agree audits are just the "permission to play," not the "strategy to win." A perfect score simply means you aren't technically broken, but the AI won't cite you unless the community is already talking. It’s like having a clean storefront; it’s necessary to look professional, but it’s the buzz on the street that actually brings people inside!

        Besides Reddit, what’s the next "messy" corner of the web you think AI is starting to trust?

  22. 1

    This is awesome, Reddit is one of the main platforms cited by the LLMs for things like this, so getting your brand mentioned there in relevant threads can actually make a huge difference. I'm launching a tool that addresses this AI visibility issue from the other end (SEO/AEO long-form content published consistently to boost presence and close any gaps from the audit). I think that your approach with the Reddit publishing is also really essential for efficient growth, with the closed loop audit<-->content being a major differentiator. Super cool. Good luck!!

    1. 1

      The closed loop is the right way to think about it. Audit shows the gap, content fills it, repeat. Most tools in this space stop at the diagnostic and leave the doing to the customer, which is why scores go up and citations don't.

      Long-form content and Reddit presence actually compound when you run them together. The blog post gives the model something structured to cite, the Reddit thread gives it the human reasoning behind why anyone would pick you. Models pull from both layers when answering comparison queries.

      Send me a link when you launch, would be useful to see how you're handling the publishing cadence side. That's the part most founders can't sustain alone.

  23. 1

    The audit insight is spot on — structure doesn’t matter if you’re not in the source layer.

    What you’re seeing with Reddit is exactly where AI is pulling decision signals, not just information.

    I’ve been noticing a similar pattern from a different angle:
    comparison queries like “X vs Y” behave less like SEO and more like intent resolution moments.

    By the time someone (or AI) asks that, discovery is already done — it’s about choosing.

    Reddit threads --> raw opinions
    Comparison pages --> structured decisions

    Feels like the real opportunity is owning both layers:
    be present where people argue, and where they finalize the choice.

  24. 1

    I ran into the same thing — my “optimized” pages looked perfect but zero mentions anywhere real people talk.
    Once I started showing up in Reddit discussions and Quora, that’s when things actually moved — this direction makes way more sense 👍

  25. 0

    100% agree. Audits fail when they ignore business impact. I built BurnCheck to show the literal 'Annual Waste' in dollars—seeing a $1,200/yr loss makes the ROI of switching models undeniable.

    Check it out: burncheck.github. io/ burncheck/

    (Please copy-paste, I can't post links yet!)

  26. 0

    Title: Built a real-time crypto arbitrage scanner — 2 months in, $X MRR

    Body:

    Started building this 2 months ago. It watches 5 exchanges via WebSocket and alerts you on Telegram when there's a profitable spread (after fees).

    Stack: Python/FastAPI, React, PostgreSQL on a single Hetzner VPS.

    Live at arb-signal .com — 30-day free trial.

    Some learnings:

    - Hardest part wasn't the code, it was payment integration

    - Dodo Payments was easier than Stripe for non-US founders

    - WebSocket vs REST polling: 100x less load on exchanges

    Open to feedback / questions.

  27. -1

    Something I wish someone had told me before I wrote 60 articles: content alone is not a funnel.

    We published 60+ guides for AI coding tools — Cursor rules, CLAUDE.md patterns, stack-specific guides. Got ~500 views across them. Zero email subscribers until we added a capture form. Zero sales.

    The content was fine. The distribution was fine. The gap was between 'someone reads an article' and 'someone has a reason to come back and buy.'

    What finally clicked: your free content needs to answer a question that makes your paid product the obvious next step. Not vaguely related — directly related. The best free sample we published was a single-stack Cursor rules file. The product it leads to is the full multi-stack pack. The conversion path is obvious.

    Building in public lesson: write fewer, more targeted articles. Each one should end with a clear, logical next action for the reader — not 'check out my product' but 'here is the next piece you need, and we have it.'

    Has anyone else found a specific trigger that converted readers into first buyers?