Quick build-log. We ship Foglift, an AI-search (GEO/AEO) scanner that scores sites on how well AI crawlers can extract them. One rule from day one: dogfood every session. If our own tool can't tell us where to improve, it's not ready to sell.
This week we ran our scanner against foglift.io and two competitors that keep showing up when we prompt ChatGPT/Perplexity with "best GEO tool": peec.ai and otterly.ai. Same public endpoint, same scoring, no favoritism.
Results (2026-04-19):
| Site | Overall | AEO | SEO | GEO | Security | A11y | Perf |
|--------------|---------|-----|-----|-----|----------|------|------|
| foglift.io | 95 | 88 | 100 | 100 | 100 | 100 | 79 |
| otterly.ai | 74 | 85 | 100 | 71 | 40 | 57 | 92 |
| peec.ai | 70 | 100 | 80 | 41 | 87 | 35 | 75 |
A few honest takes.
Good news for us. AEO and Security are where AI-search visibility is actually fought. On AEO we're at 88; Peec beats us with a 100, but their Security is 87 and A11y is 35 against our 100s, so the structured-data gains leak back out through other signals. On Security we're 100 vs Otterly's 40 and Peec's 87. These are the signals LLMs actually weight when deciding who to cite.
Bad news for us. Perf dropped from 89 to 79 this month. We picked up a third-party script somewhere (14 external scripts on the homepage, +1 vs last measurement), and one is render-blocking. Nobody cares about AEO if the page isn't interactive. Top thing we're fixing next week.
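If you want a quick way to spot the same problem on your own homepage before reaching for a full audit, a crude grep heuristic (just a text-level check, not a real audit) is to list the external script tags that aren't marked `async` or `defer`; anything like that sitting in `<head>` is a render-blocking suspect:

```
# Rough heuristic: list external <script src=...> tags without async/defer.
# Swap in your own domain; this can't tell head from body, so treat the
# output as candidates to check, not a verdict.
curl -s https://foglift.io \
  | grep -oE '<script[^>]*src="[^"]*"[^>]*>' \
  | grep -vE 'async|defer'
```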
The uncomfortable part. Publishing competitor numbers is risky. They can re-run against us anytime. But the whole pitch of the product is that these scores are objective and reproducible, so hiding them would be the real tell. Anyone can run:
curl "https://foglift.io/api/v1/scan?url=foglift.io"
...on any of these three sites and get the same breakdown. No auth needed.
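For a side-by-side view, a small loop over the same endpoint works too. Sketch only: it assumes `jq` is installed and that the JSON response exposes an `.overall` field, so adjust the filter to whatever the scan actually returns (run the plain curl above first to see the payload):

```
# Hit the public scan endpoint for all three sites and print one score each.
# The .overall jq path is illustrative -- match it to the real response shape.
for site in foglift.io otterly.ai peec.ai; do
  printf '%s: ' "$site"
  curl -s "https://foglift.io/api/v1/scan?url=${site}" | jq '.overall'
done
```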
One thing that surprised me. AEO (how structured and AI-readable a page is) barely correlates with SEO across these three. Peec has SEO 80, AEO 100. Otterly has SEO 100, AEO 85. If you're optimizing for both, you're not going to get there by tuning one set of signals. Different audits, different fixes.
Curious what the AEO/Perf tradeoff looks like on other indie SaaS homepages. The endpoint above is public if anyone wants to scan their own site. Happy to compare notes.
This is interesting — especially the AEO vs perf tradeoff.
One thing I’ve been noticing around AI search results: it’s not just structured data or speed, it’s how clearly the product is interpretable at first glance.
A lot of tools optimize for crawlers, but still feel ambiguous in positioning — so even if they’re extracted, they’re not selected.
Feels like the real edge might be:
→ not just being readable
→ but being instantly “understandable + distinct” in one pass
Curious if you’ve seen cases where two technically similar pages perform differently just because one is clearer in how it frames itself?