I'm building a niche B2B tool and wanted to understand how AI search engines decide what to recommend. So I ran a simple test: same query across three engines.
"What tools help game studios track community feedback?"
ChatGPT - 13 citations. Every single one was a product page on a tool's own website. One company showed up three times (homepage + two feature pages). Zero blogs, zero Reddit, zero listicles.
Claude - 15 citations. Mostly product pages but also pulled in a couple listicles, a press release, and a how-to guide. Interestingly, it didn't search the web at first. It just answered from memory. I had to prompt it to actually search.
Perplexity - 10 citations. Completely different answer. Almost no product pages. Listicles, blog guides, and a Reddit thread where devs were asking each other what they use. The company that dominated ChatGPT? Not mentioned once.
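For anyone who wants to replicate the tally: here's a rough sketch of how you could bucket citation URLs by source type. The hint lists are my own guesses about what separates a product page from a listicle or a forum thread, not anything the engines expose.

```python
# Rough bucketing of citation URLs by source type.
# The host and path hints are guesses, not anything the engines expose.
from urllib.parse import urlparse

FORUM_HOSTS = ("reddit.com", "news.ycombinator.com", "stackoverflow.com")
LISTICLE_HINTS = ("best-", "top-", "-tools", "alternatives")
BLOG_HINTS = ("/blog/", "/guides/", "how-to")

def classify(url: str) -> str:
    parsed = urlparse(url)
    host = parsed.netloc.lower().removeprefix("www.")
    path = parsed.path.lower()
    if any(host == h or host.endswith("." + h) for h in FORUM_HOSTS):
        return "forum"
    if any(hint in path for hint in LISTICLE_HINTS):
        return "listicle"
    if any(hint in path for hint in BLOG_HINTS):
        return "blog"
    return "product page"  # default bucket: assume the vendor's own site
```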
The biggest surprise was the niche B2B angle. When you ask AI for the best TV, it serves Wirecutter or Reddit, not Samsung.com. But there's no Wirecutter for most B2B categories. The review ecosystem doesn't exist. So AI works with whatever it can find, which means your own feature pages can win by default on ChatGPT, and a single listicle mention can get you onto Perplexity.
The bad news: there's no single "AI SEO" strategy. Each engine trusts different source types.
The good news: in niche B2B, the bar is really low because almost nobody is creating the right content yet. The playbook isn't complicated; it just doesn't exist for most categories.
Planning to run this same test in 90 days after building out dedicated feature pages and some comparison content.
This lines up with what I’ve been seeing too. It’s less “AI SEO” and more a question of which artifacts each engine trusts when certainty is low.
One framing that’s helped me think about it:
ChatGPT seems to default to authoritative first-party sources when a category lacks an external review ecosystem. Perplexity leans toward consensus signals (lists, forums, comparisons). Claude sits somewhere in between and often answers from priors unless pushed to search.
For niche B2B, that suggests a split strategy rather than one playbook:
– Clear, crawlable feature + use-case pages for ChatGPT
– A few high-signal comparisons or “what people actually use” posts for Perplexity-style engines
– Lightweight explainer content to shape Claude’s priors
Curious to see your 90-day rerun, especially whether creating one strong third-party mention shifts Perplexity more than adding multiple internal pages.
This matches what I’ve been seeing too: “AI SEO” isn’t one thing; it’s source-type arbitrage.
A practical way I’ve started thinking about it:
For niche B2B where the review ecosystem is thin, it feels like the play is to deliberately create the missing corpus: comparison pages (“X vs Y”), use-case pages (“for game studios / for community managers”), and at least a couple credible third‑party mentions (guest posts, partner ecosystems, curated directories).
Curious what query set you’re using for the 90‑day retest — a fixed list of 20–50 buying-intent prompts? That seems like the closest thing to a benchmark.
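If it helps, here's roughly how I'd wire up that kind of benchmark: a fixed prompt list, an engine-agnostic get_citations hook (hypothetical; plug in whatever API access or manual exports you actually have), and per-engine source-type counts so the 90-day diff is apples to apples.

```python
# Repeatable benchmark sketch: fixed buying-intent prompts, one
# engine-agnostic fetch hook, per-engine source-type counts.
# get_citations is a hypothetical hook -- wire it to whatever API
# access or manual exports you actually have.
from collections import Counter
from typing import Callable

PROMPTS = [
    "What tools help game studios track community feedback?",
    # ...plus 20-50 more buying-intent variants
]
ENGINES = ["chatgpt", "claude", "perplexity"]

def run_benchmark(
    get_citations: Callable[[str, str], list[str]],  # (engine, prompt) -> cited URLs
    classify: Callable[[str], str],                  # URL -> source-type bucket
) -> dict[str, Counter]:
    results: dict[str, Counter] = {engine: Counter() for engine in ENGINES}
    for engine in ENGINES:
        for prompt in PROMPTS:
            for url in get_citations(engine, prompt):
                results[engine][classify(url)] += 1
    return results
```

Snapshot the counters now, rerun the exact same list in 90 days, and the per-engine shifts fall out directly.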