Background: I’ve spent my career in finance, not engineering.
About 2 months ago I started building ProofRelay with AI tools — nights and early mornings, about 2 hours a day.
The idea kept nagging at me:
As AI agents start calling APIs, hiring freelancers, and moving money autonomously, there’s no standardized way to produce machine-verifiable proof that an action actually occurred.
Most systems rely on database logs, screenshots, or internal audit trails. Those don’t travel well across systems.
So I built a minimal API:
POST /execute
Receipts are:
• Canonicalized
• SHA-256 hashed
• Sealed with HMAC
• Stored idempotently
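Not the actual ProofRelay implementation, just a minimal Python sketch of that four-step pipeline (the key handling, field names, and in-memory store are my assumptions):

```python
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # assumption: a server-side secret, not part of the post

def make_receipt(action: dict, store: dict) -> dict:
    # 1. Canonicalize: deterministic JSON (sorted keys, no extra whitespace)
    canonical = json.dumps(action, sort_keys=True, separators=(",", ":")).encode()
    # 2. SHA-256 hash of the canonical bytes
    digest = hashlib.sha256(canonical).hexdigest()
    # 3. Seal with HMAC so any later tampering is detectable
    seal = hmac.new(SECRET, digest.encode(), hashlib.sha256).hexdigest()
    # 4. Store idempotently: the hash is the key, so re-posting the same
    #    action returns the same receipt instead of creating a duplicate
    if digest not in store:
        store[digest] = {"sha256": digest, "hmac": seal}
    return store[digest]

def verify_receipt(receipt: dict) -> bool:
    # Anyone holding the secret can recompute the seal and check it
    expected = hmac.new(SECRET, receipt["sha256"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["hmac"])
```

Canonicalization is what makes the idempotency work: `{"a": 1, "b": 2}` and `{"b": 2, "a": 1}` hash to the same receipt, so key order on the wire doesn’t matter.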
This week I recorded an 80-second demo:
An agent “hires” a freelancer, verifies the work, and generates a tamper-evident receipt.
https://www.loom.com/share/845adcf05d2e40c6b495e3b9663fcfd0
Biggest surprise so far:
The hardest part wasn’t the crypto; it was deciding what the trust model should be, and whether anyone actually needs this primitive yet.
Would love feedback:
If you’re building automation or AI agents, does proof-of-execution feel like a real pain?
2 hours a day around a full-time job and you built a proof-of-execution API — that's the real indie hacker story. The constraint forces you to be ruthlessly focused on what actually matters.
One thing I've found helps with limited time: investing upfront in prompt structure. Agents with sloppy, unstructured prompts require constant babysitting and debugging. I built flompt to solve this — a visual prompt builder where you compose agent instructions from 12 semantic blocks (role, objective, constraints, output_format, etc.) and compile to Claude-optimized XML. Front-loading the structure saves a ton of debugging time later.
A ⭐ on github.com/Nyrok/flompt would mean a lot — solo open-source founder here 🙏