Most APIs weren't built for machines to use. That sounds obvious, but the data surprised me.
I've been scoring 387 APIs across 11 signals that measure how well they work with AI coding agents: auth method, error message quality, machine-readable pricing, CLI tooling, rate limit headers, and so on. The kind of stuff that determines whether Claude Code or Cursor can actually use an API without a human babysitting it.
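To make that concrete, here's roughly the shape of one scored record. A sketch only; the field names are illustrative, not the exact schema:

```typescript
// Roughly one scored API record (illustrative field names, not the
// exact CLIRank schema).
interface ApiScore {
  name: string;                    // e.g. "stripe"
  authMethod: "api-key" | "oauth-browser" | "oauth-device";
  machineReadablePricing: boolean; // structured pricing an agent can parse
  hasCli: boolean;                 // ships an official CLI
  sendsRateLimitHeaders: boolean;  // X-RateLimit-* / Retry-After
  errorMessageQuality: number;     // 0-10, from docs and sampled error responses
  agentScore: number;              // weighted aggregate over all 11 signals
}
```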
Here's what three weeks of data collection turned up.
60%+ of popular APIs have no machine-readable pricing.
This one floored me. If you ask an agent to "find the cheapest email API for my use case," it literally can't do it. Pricing lives in marketing pages, behind "Contact Sales" buttons, or in PDFs. The agent has nothing to parse. Stripe, Twilio, and a handful of others publish structured pricing in their APIs or docs; most don't. It's a massive blind spot.
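For contrast, here's a sketch of what machine-readable pricing could look like. The shape is hypothetical; there's no standard for this today, which is exactly the problem:

```typescript
// Hypothetical structured pricing an agent could actually compare across
// providers. No standard endpoint or schema for this exists today.
const pricing = {
  currency: "USD",
  tiers: [
    { name: "free", includedEmailsPerMonth: 3_000, usdPerMonth: 0 },
    { name: "pro", includedEmailsPerMonth: 50_000, usdPerMonth: 20 },
  ],
  overageUsdPerEmail: 0.001,
};

// With something like this published at a known URL, "find the cheapest
// email API for 10k emails/month" becomes a filter-and-sort, not a
// scraping job.
```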
Only ~30% ship CLI tools.
If your agent is running in CI - deploying, testing, wiring up integrations - it needs a CLI. No browser. No OAuth redirect dance. Just api-tool configure --key xxx and go. Most APIs assume a human is clicking through a dashboard. That assumption is breaking fast.
Env var auth is king.
APIs that let you export API_KEY=xxx and go score dramatically higher on agent-friendliness than anything requiring browser-based OAuth. It's not that OAuth is bad. It's that agents in headless environments can't open a browser to approve a consent screen. The APIs that figured this out early - Anthropic, OpenAI, Resend, Postmark - just work. The ones that didn't are effectively invisible to agents.
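The whole pattern is a few lines, which is the point. The endpoint and variable name below are made up for illustration:

```typescript
// Env-var auth: the one flow a headless agent can complete end to end.
// The endpoint and env var name are illustrative.
const apiKey = process.env.EMAIL_API_KEY;
if (!apiKey) throw new Error("EMAIL_API_KEY is not set");

const res = await fetch("https://api.example.com/v1/emails", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ to: "user@example.com", subject: "hello" }),
});
if (!res.ok) throw new Error(`Send failed: ${res.status}`);
```

No browser, no redirect, no human in the loop.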
Community sentiment is a leading indicator.
I aggregated 280+ reviews from GitHub issues, Reddit threads, and StackOverflow. The correlation between "developers complain about this API's DX" and "agents struggle with this API" is almost 1:1. Bad docs for humans means bad docs for machines. Turns out the bar is the same.
The project is CLIRank (clirank.dev) - it scores APIs on agent-friendliness and ships as an MCP server so your agent can query it directly. Three weeks old, about 300 uniques/day.
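If you want to poke at it from code rather than from an agent, connecting with the TypeScript MCP SDK looks roughly like this. The tool name and arguments are illustrative; list the server's tools to see what it actually exposes:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server over stdio, the same way an agent host would.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "clirank-mcp-server"],
});
const client = new Client({ name: "clirank-demo", version: "0.1.0" });
await client.connect(transport);

// See what the server actually exposes before calling anything.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// "score_api" and its argument are illustrative, not the real tool name.
const result = await client.callTool({ name: "score_api", arguments: { api: "resend" } });
console.log(result.content);
```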
What's working: the data angle. People share the scores because they're surprised by them. Devs love arguing about whether their favourite API deserves a 7 or a 9.
What's not working: conversion from visitor to MCP installer. Getting someone to npm install clirank-mcp-server is a bigger ask than I expected. The "aha moment" only hits once the agent actually uses it, but there's friction before that.
What I'd do differently: I spent too long on the web UI early on. The MCP server is the actual product. The website is just the shop window. Should have gone MCP-first from day one.
Anyone else building tools for the AI-agent layer? Curious what distribution channels are actually working for you - because traditional SEO feels irrelevant when your end user is a machine, not a person.
This is solid, especially the pricing and CLI points. Feels like you're actually mapping where agents break.

One thing I'd push on, though: right now this reads like "API directory with scores," but the real value is closer to "which APIs won't break your agent in production." That's a very different use case. Devs don't really care about scores; they care about "will this work without me babysitting it?" If that's the core, the product isn't a directory, it's a reliability filter for agents. That shift alone could change how people approach it: less browsing, more decision-making.

Also, small but important: CLIRank doesn't really carry that "agent-safe / reliability" signal; it reads more like a generic ranking tool. If you lean into the sharper positioning, the name should pre-frame it, so people instantly know when to use it.