I use AI coding tools every day, but I realized I was basically flying blind.
I’d switch between Claude Code, Cursor, Gemini CLI, and Codex depending on the task — but I couldn’t tell which one was actually worth using.
Not in theory.
In my real workflow.
So I built Origin Solo — a free personal analytics tool for AI-assisted development.
It tracks things like token usage, sessions, and which tools you reach for per task.
The most useful part for me has been seeing where AI-generated code gets rewritten later. That’s a much better signal than “it felt productive.”
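As a rough illustration of that kind of signal (this is a hypothetical sketch, not how Origin Solo actually measures it), one crude "rewrite" metric is the fraction of AI-generated lines that still survive verbatim in the current version of a file:

```python
import difflib

def ai_line_survival(ai_snapshot: list[str], current: list[str]) -> float:
    """Fraction of originally AI-generated lines still present verbatim
    in the current file -- a crude proxy for 'how much got rewritten'."""
    if not ai_snapshot:
        return 1.0  # nothing was generated, so nothing was rewritten
    matcher = difflib.SequenceMatcher(a=ai_snapshot, b=current, autojunk=False)
    # Sum the lengths of all matching runs of lines between the two versions.
    kept = sum(block.size for block in matcher.get_matching_blocks())
    return kept / len(ai_snapshot)

# Two of four AI-generated lines were later rewritten:
ai = ["def add(a, b):", "    return a + b", "def sub(a, b):", "    return a - b"]
now = ["def add(a, b):", "    return a + b", "def sub(x, y):", "    return x - y"]
print(ai_line_survival(ai, now))  # 0.5
```

A real tool would presumably track this per commit or per session rather than per snapshot, but the idea is the same: low survival means time spent redoing the AI's work.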
I also intentionally made it for solo developers, not teams. No approvals, no governance dashboards, no enterprise noise.
Just visibility into your own workflow.
Would love feedback from other indie hackers:
What would you want to measure about your AI coding usage that you currently can’t?
getorigin.io
This is useful, but right now it feels like a "nice dashboard" problem, not a must-have. Most devs don't wake up thinking "I need analytics on my AI usage"; they care about output, speed, and cost in a very direct way.
The strongest part here is actually buried: "seeing where AI-generated code gets rewritten." That's the real insight, because it ties directly to wasted time and bad output.
Everything else (tokens, sessions, etc.) feels secondary unless it connects to a decision: what should I stop using or change?
If you push it more toward "which model is actually wasting your time" and "where AI is hurting your code quality," it becomes way more compelling.
Right now it tracks a lot, but it's not fully clear what action I'd take after looking at it. If you nail that loop, this becomes a lot more than a dashboard.
Curious — what’s the one decision this helps you make better today?