3 Comments

A behind-the-scenes look at how we manage LLM keys without seeing user data

As we started shipping more LLM-powered features, two problems kept coming up: API key management and cost visibility.

Between multiple providers, different environments, and growing usage, it became hard to answer basic questions:

  1. Where are the keys?
  2. Who’s using what?
  3. How much is this actually costing us?

We didn’t want a solution that required logging prompts or responses, or pulling sensitive data into a central backend.

So we built our own setup.

At a high level:

  • API keys are encrypted client-side and never stored in plaintext
  • We use a single virtual key instead of juggling provider-specific keys
  • Usage tracking is metadata-only (token counts, model names, timing)
  • No prompts or responses are collected
  • Inference stays on the client, so it works with cloud APIs and local models like llamafile
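To make the "metadata-only" point concrete, here's a rough sketch of what a usage record could look like. The field names and `record_usage` helper are my illustration, not our actual schema; the key property is that no prompt or response text ever appears in the record.

```python
import time
from dataclasses import dataclass, asdict

@dataclass
class UsageRecord:
    # Only metadata is kept; prompt and response text never appear here.
    model: str
    prompt_tokens: int
    completion_tokens: int
    latency_ms: float

def record_usage(model: str, prompt_tokens: int, completion_tokens: int,
                 started_at: float) -> dict:
    """Build a usage entry from metadata gathered around an inference call."""
    latency_ms = (time.monotonic() - started_at) * 1000
    return asdict(UsageRecord(model, prompt_tokens, completion_tokens, latency_ms))

started = time.monotonic()
# ... the actual client-side inference call would happen here ...
entry = record_usage("example-model", prompt_tokens=812, completion_tokens=143,
                     started_at=started)
assert "prompt" not in entry and "response" not in entry
```

Because the record is just counts and timing, it can be shipped to a central backend for cost dashboards without the backend ever seeing user data.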

We ran this quietly in a small alpha to see if it held up in real usage.

It is now in open beta and free.

We’re fixing issues as they come up.

I’m sharing this mostly to sanity-check the approach with other builders:

  • How are you handling LLM keys today?
  • At what point did cost tracking become painful for you?
  • What’s missing for this to be actually useful day-to-day?
Posted on January 20, 2026

    This resonates — I'm building an AI-powered tech news aggregator and managing API costs across multiple providers is one of those "hidden complexity" problems that compounds quickly.

    To your questions:

    How I'm handling keys today:
    Environment variables + provider-specific dashboards. It works, but I'm checking 3-4 different dashboards to understand monthly spend. The "virtual key" abstraction you mention sounds like it would simplify this significantly.

    When cost tracking became painful:
    Around $50-100/month. At that point, I needed to know which features were driving costs, not just total spend. Token counts by endpoint would've been helpful.

    What would make this useful day-to-day:

    • Alerts when usage spikes unexpectedly (e.g., "You're on track to spend 3x more than last week")
    • Per-feature breakdowns if you're running multiple AI features
    • Comparison view across providers (for those of us still deciding between Claude vs GPT for different tasks)
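    The spike alert in particular seems cheap to prototype. A naive projection-based check (the numbers and function below are made up for illustration) could be as simple as:

```python
def spike_alert(spend_so_far: float, days_elapsed: int,
                last_week_total: float, threshold: float = 3.0):
    """Project this week's spend from the days elapsed so far and
    compare it against last week's total."""
    projected = spend_so_far / days_elapsed * 7
    ratio = projected / last_week_total
    return ratio >= threshold, projected

# Three days in, $90 spent, versus $70 for all of last week:
alert, projected = spike_alert(90.0, days_elapsed=3, last_week_total=70.0)
# projected is $210, i.e. 3x last week, so the alert fires
```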

    The "no prompts/responses logged" part is important — that's usually the dealbreaker for trying third-party key management solutions.

    Are you planning to support cost estimation before requests? That would be huge for setting up rate limits or showing users "this action costs ~X tokens."
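    Even a crude version of pre-request estimation would help. Something like the sketch below, where the prices are made-up per-1K-token rates and the 4-characters-per-token rule is a rough stand-in for a real tokenizer:

```python
# Illustrative only: hypothetical rates, not any provider's real pricing.
PRICE_PER_1K_TOKENS = {"model-a": 0.0005, "model-b": 0.003}

def estimate_cost(prompt: str, model: str, expected_output_tokens: int = 0) -> float:
    """Estimate request cost before sending it, using ~4 chars per token."""
    est_input_tokens = max(1, len(prompt) // 4)
    total_tokens = est_input_tokens + expected_output_tokens
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS[model]

cost = estimate_cost("x" * 4000, "model-a", expected_output_tokens=500)
# ~1000 input tokens + 500 output tokens at a $0.0005/1K rate
```

    That would be enough to power both rate limits and a "this action costs ~X tokens" hint in the UI.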


      Thanks for sharing this, yamamoto. That’s a unique angle; I'll raise it internally and think it through more.


        Appreciate you considering it. Looking forward to seeing how the tool evolves.
