Hey Indie Hackers 👋
I recently launched Lumra, a prompt management platform built for developers, indie builders, and prompt engineers who are working seriously with AI.
As AI-powered products grow, prompts stop being simple inputs and start becoming core logic.
They evolve, get refined, reused, broken, fixed — just like code.
But most tools don’t treat prompts that way.
In practice, prompts usually end up:
• Scattered across Notion pages and Google Docs
• Copied between projects with no version history
• Tweaked ad hoc, with no clear record of what actually changed or why
That friction was slowing me down in my own projects — so I built Lumra.
What Lumra focuses on
Lumra is designed to make prompt engineering feel closer to software engineering:
• Structured prompt storage instead of loose text
• Versioning so you can iterate safely
• Clear organization across projects and use cases
• Reusability without copy-paste chaos
The goal isn’t to generate prompts for you — it’s to help you manage, refine, and scale the prompts you already care about.
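To make that concrete, here's a rough sketch of what treating a prompt as a structured, versioned object looks like. This is plain Python with a made-up schema, just to illustrate the idea; it's not Lumra's actual data model:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PromptVersion:
    """One immutable revision of a prompt, plus a note on what changed."""
    number: int
    text: str
    change_note: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class Prompt:
    """A named, versioned prompt instead of loose text in a doc."""
    name: str
    project: str
    tags: list[str]
    versions: list[PromptVersion] = field(default_factory=list)

    def save_version(self, text: str, change_note: str) -> PromptVersion:
        # Append a new revision rather than overwriting the old one,
        # so earlier versions stay available for comparison and rollback.
        version = PromptVersion(number=len(self.versions) + 1, text=text, change_note=change_note)
        self.versions.append(version)
        return version

    def latest(self) -> PromptVersion:
        return self.versions[-1]

summarizer = Prompt(name="ticket-summarizer", project="support-bot", tags=["summarization"])
summarizer.save_version("Summarize the ticket in 3 bullet points.", "initial draft")
summarizer.save_version(
    "Summarize the ticket in 3 bullet points. Keep each under 15 words.",
    "constrain bullet length; long bullets were breaking the UI",
)
print(summarizer.latest().change_note)
```

Each revision carries a note about what changed and why, which is exactly the history that gets lost when prompts live in a doc.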
Who it’s for
• Indie hackers building AI-powered products
• Developers shipping LLM features in real apps
• Prompt engineers who want clarity and control
• Anyone tired of losing “that one good prompt”
Indie-built, early-stage
Lumra is fully indie-built and still early.
I’m actively shaping it based on real usage and feedback from builders.
👉 Try it here: https://lumra.orionthcomp.tech
I’d love to hear:
• How you currently manage prompts
• What breaks in your workflow
• What you’d expect from a “GitHub for prompts”
Thanks for reading — and happy building
The "GitHub for prompts" framing is compelling. Prompts really do evolve like code - they have edge cases, they break in unexpected contexts, they need to be tested against different inputs. The versioning angle is smart.
Curious about a few things:
How do you handle prompt dependencies? A lot of my prompts reference other prompts (like a system prompt that calls a formatting template). Does Lumra support that kind of composition?
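For context, this is roughly what I mean by composition. It's a hypothetical sketch of the hand-rolled approach I use today (nothing to do with Lumra's API), where named fragments embed each other via placeholders:

```python
# Hypothetical hand-rolled setup: named prompt fragments that embed
# other fragments via {placeholders}. Purely illustrative.
FRAGMENTS = {
    "formatting_template": "Respond in Markdown with a short title and bullet points.",
    "system_prompt": (
        "You are a support assistant for an invoicing app.\n"
        "{formatting_template}\n"
        "Never promise refunds."
    ),
}

def render(name: str) -> str:
    """Recursively expand placeholders that point at other fragments."""
    text = FRAGMENTS[name]
    for other in FRAGMENTS:
        placeholder = "{" + other + "}"
        if placeholder in text:
            text = text.replace(placeholder, render(other))
    return text

print(render("system_prompt"))
```

The pain is that when formatting_template changes, every prompt that embeds it changes silently too, so I'd love to know whether Lumra versions the pieces as well as the composed result.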
What's the collaboration story? If I'm iterating on prompts with a team, can we see who changed what and why - like commit messages for prompts?
How do you think about testing? The hardest part of prompt engineering isn't writing - it's knowing if version B is actually better than version A. Any built-in way to compare outputs across versions?
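To be concrete about the comparison I have in mind, this is the kind of throwaway harness I end up writing by hand today; call_model is just a stand-in for whatever LLM client you actually use:

```python
# Hypothetical A/B harness: run two prompt versions over the same test
# inputs and compare outputs side by side. Swap call_model for a real client.
def call_model(prompt: str, user_input: str) -> str:
    # Stand-in so the sketch runs; replace with your actual LLM call.
    return f"[fake output for prompt: {prompt[:25]}...] {user_input[:30]}..."

PROMPT_A = "Summarize the ticket in 3 bullet points."
PROMPT_B = "Summarize the ticket in 3 bullet points. Keep each under 15 words."

TEST_INPUTS = [
    "Customer says PDF export has failed since the last update...",
    "User was double-charged on the annual plan and wants a refund...",
]

for user_input in TEST_INPUTS:
    print("INPUT:", user_input)
    print("  A:", call_model(PROMPT_A, user_input))
    print("  B:", call_model(PROMPT_B, user_input))
    print()
```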
The "loose text scattered across Notion" problem is real. I've lost track of good prompts more times than I'd like to admit.