I got frustrated with my own prompts.
Not because AI is bad. Because I kept writing walls of text and getting mediocre output back. I'd tweak one sentence, re-run, tweak another. It felt like guessing.
So I built flompt.
The core idea: instead of writing prompts as one big blob, you break them into 12 typed blocks. Role. Audience. Context. Objective. Constraints. Examples. Chain of thought. Output format. And so on. Each block has a specific job. You edit them independently on a visual canvas, then compile everything into Claude-optimized XML.
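To make the idea concrete, here's a minimal sketch of what "typed blocks compiled into XML" could look like. This is my own illustration, not flompt's actual implementation: the `PromptBlock` type, the block names, and the `compilePrompt` function are assumptions based on the description above.

```typescript
// Hypothetical sketch of the blocks-to-XML idea. flompt's real block
// schema, tag names, and ordering rules may differ.
type BlockKind =
  | "role" | "audience" | "context" | "objective"
  | "constraints" | "examples" | "chain_of_thought" | "output_format";

interface PromptBlock {
  kind: BlockKind;
  content: string;
}

// Each block becomes one XML-tagged section; empty blocks are skipped,
// so you can edit blocks independently without breaking the compile.
function compilePrompt(blocks: PromptBlock[]): string {
  return blocks
    .filter((b) => b.content.trim().length > 0)
    .map((b) => `<${b.kind}>\n${b.content.trim()}\n</${b.kind}>`)
    .join("\n\n");
}

const prompt = compilePrompt([
  { kind: "role", content: "You are a senior technical editor." },
  { kind: "constraints", content: "Keep the rewrite under 200 words." },
  { kind: "examples", content: "" }, // empty -> dropped from output
]);
console.log(prompt);
```

The payoff of this structure is that editing one block never disturbs the others, and the compiler, not the author, worries about clean XML delimiters.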
The difference in output quality is noticeable immediately. Not because the model got smarter. Because the instructions got clearer.
What I shipped:
claude mcp add flompt https://flompt.dev/mcp/

The numbers so far:
No paywall. No account required. I just built it, shipped it, and started getting users. Still early, but the extension has been picking up installs daily.
What I'd do differently:
Ship the extension first, not last. The web app is cool but the extension is what makes people go "oh, this is actually useful right now."
Stack: React + TypeScript + React Flow + Zustand (frontend), FastAPI + Claude API (backend), Caddy for the reverse proxy.
It's free and open-source.
Try it: https://flompt.dev
Repo (a star means a lot for a solo project): https://github.com/Nyrok/flompt
Happy to answer any questions about the build.