As we started shipping more LLM-powered features, one problem kept coming up: API keys and cost visibility.
Between multiple providers, different environments, and growing usage, it became hard to answer basic questions about where the spend was going.
We didn’t want a solution that required logging prompts or responses, or pulling sensitive data into a central backend.
So we built our own setup.
At a high level: each provider key is wrapped in a virtual key, usage is metered per key, and prompts and responses are never logged.
We ran this quietly in a small alpha to see if it held up in real usage. It's now in open beta and free, and we're fixing issues as they come up.
I'm sharing this mostly to sanity-check the approach with other builders: How are you handling keys today? When did cost tracking become painful? What would make something like this useful day-to-day?
This resonates. I'm building an AI-powered tech news aggregator, and managing API costs across multiple providers is one of those "hidden complexity" problems that compounds quickly.
To your questions:
How I'm handling keys today:
Environment variables + provider-specific dashboards. It works, but I'm checking 3-4 different dashboards to understand monthly spend. The "virtual key" abstraction you mention sounds like it would simplify this significantly.
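For concreteness, my current setup is roughly the following sketch (the provider names and environment variable names are just examples of the pattern, not a specific product's API):

```python
import os

# Hypothetical provider list; each key lives in its own environment
# variable rather than in code or config files.
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "mistral": "MISTRAL_API_KEY",
}

def load_keys() -> dict:
    """Collect whichever provider keys are present in the environment."""
    keys = {}
    for provider, var in PROVIDER_ENV_VARS.items():
        value = os.environ.get(var)
        if value:
            keys[provider] = value
    return keys
```

The downside is exactly what you describe: the keys are easy to manage, but the spend data stays siloed in each provider's dashboard.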
When cost tracking became painful:
Around $50-100/month. At that point, I needed to know which features were driving costs, not just total spend. Token counts by endpoint would've been helpful.
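What I ended up wanting is something like this minimal sketch of per-endpoint accounting (the class and endpoint names are hypothetical, and the token counts are assumed to come back from each provider's usage metadata):

```python
from collections import defaultdict

class TokenTracker:
    """Accumulate token counts per feature/endpoint so monthly spend
    can be broken down, not just totaled across providers."""

    def __init__(self):
        self._usage = defaultdict(lambda: {"prompt": 0, "completion": 0})

    def record(self, endpoint: str, prompt_tokens: int, completion_tokens: int):
        # Called once per request, with counts taken from the
        # provider's response metadata.
        self._usage[endpoint]["prompt"] += prompt_tokens
        self._usage[endpoint]["completion"] += completion_tokens

    def report(self) -> dict:
        """Snapshot of per-endpoint totals, e.g. for a monthly review."""
        return {ep: dict(counts) for ep, counts in self._usage.items()}
```

Nothing fancy, but even this level of breakdown would have answered "which feature is driving costs" long before a dashboard could.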
What would make this useful day-to-day:
The "no prompts/responses logged" part is important; that's usually the dealbreaker for trying third-party key management solutions.
Are you planning to support cost estimation before requests? That would be huge for setting up rate limits or showing users "this action costs ~X tokens."
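Even a rough heuristic would go a long way here. Something like this sketch, which assumes the common ~4-characters-per-token rule of thumb for English text (a real tokenizer such as tiktoken would be more accurate, and the per-1k price is a made-up example rate, not any provider's actual pricing):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Cheap pre-request token estimate; good enough for rate limits
    or a 'this action costs ~X tokens' hint in the UI."""
    return max(1, round(len(text) / chars_per_token))

def estimate_cost_usd(text: str, price_per_1k_tokens: float) -> float:
    """Convert the estimate to dollars given an assumed per-1k rate."""
    return estimate_tokens(text) / 1000 * price_per_1k_tokens
```

It's deliberately approximate, but for gating expensive actions before they happen, approximate is usually enough.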