A few months ago I was building internal tools for a client and kept running into the same wall:
Connecting an LLM to a real API always meant writing a ton of glue code.
Every time, from scratch.
So I built Orqis to solve it for myself. Then I figured other people probably had the same problem.
You paste an OpenAPI spec URL.
Orqis reads it, generates a set of typed tools for every endpoint, and spins up a conversational agent that can call your API in natural language.
No code. No prompt engineering. No LangChain boilerplate.
The whole thing — from spec URL to working agent — takes under a minute.
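To make that concrete: the core step is turning each OpenAPI endpoint into a function-calling tool schema the model can use. This is a rough sketch of the idea, not Orqis's actual code — `spec_to_tools` and the toy spec are hypothetical:

```python
def spec_to_tools(spec: dict) -> list[dict]:
    """Turn each (path, method) pair in an OpenAPI spec into a tool definition."""
    tools = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            # Map each declared parameter to a JSON-schema property.
            props = {
                p["name"]: {
                    "type": p.get("schema", {}).get("type", "string"),
                    "description": p.get("description", ""),
                }
                for p in op.get("parameters", [])
            }
            tools.append({
                # Prefer operationId; fall back to a name derived from the route.
                "name": op.get("operationId")
                        or f"{method}_{path.strip('/').replace('/', '_')}",
                "description": op.get("summary", f"{method.upper()} {path}"),
                "parameters": {
                    "type": "object",
                    "properties": props,
                    "required": [p["name"] for p in op.get("parameters", [])
                                 if p.get("required")],
                },
            })
    return tools

# Tiny example spec with one endpoint.
spec = {
    "paths": {
        "/users/{id}": {
            "get": {
                "operationId": "get_user",
                "summary": "Fetch a user by id",
                "parameters": [{"name": "id", "in": "path", "required": True,
                                "schema": {"type": "integer"}}],
            }
        }
    }
}

tools = spec_to_tools(spec)
print(tools[0]["name"])  # get_user
```

The resulting list is what you'd hand to any function-calling LLM API; the agent loop then just dispatches each call to the matching HTTP endpoint.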
The hardest part wasn't the AI.
It was making the agent reliable.
Most of the iteration was on the agent runtime, not the UI.
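One reliability tactic worth illustrating (my assumption about what a runtime like this needs, not a description of Orqis's internals) is validating the model's tool-call arguments against the tool's schema before hitting the API, so a malformed call becomes a retry prompt instead of a failed request:

```python
def validate_args(tool: dict, args: dict) -> list[str]:
    """Check tool-call args against the tool's JSON-schema parameters.

    Returns a list of error strings; an empty list means the call is OK.
    (Hypothetical helper, not Orqis code.)
    """
    errors = []
    schema = tool["parameters"]
    for name in schema.get("required", []):
        if name not in args:
            errors.append(f"missing required argument: {name}")
    type_map = {"integer": int, "number": (int, float),
                "string": str, "boolean": bool}
    for name, value in args.items():
        prop = schema["properties"].get(name)
        if prop is None:
            errors.append(f"unknown argument: {name}")
        elif not isinstance(value, type_map.get(prop["type"], object)):
            errors.append(f"argument {name} should be {prop['type']}")
    return errors

tool = {
    "name": "get_user",
    "parameters": {
        "type": "object",
        "properties": {"id": {"type": "integer"}},
        "required": ["id"],
    },
}

print(validate_args(tool, {"id": "42"}))  # string where an integer is expected
print(validate_args(tool, {"id": 42}))    # valid call, no errors
```

Feeding the error list back to the model as a tool result usually gets a corrected call on the next turn.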
Would love feedback from anyone who’s: