Hey IH 👋
I'm Soji, a solo founder based in the UK. I want to share what I've been building for the past few weeks and get your honest feedback.
The problem I couldn't stop thinking about
AI agents are being deployed inside real businesses right now. They read files, write reports, call APIs, execute payments. Most of them are connected directly to production tools with zero oversight layer in between.
I kept asking: what happens when the agent does something it shouldn't?
Not because it's malicious, but because it misunderstood an instruction. Or because someone fed it a prompt it wasn't designed for. Or because a bad actor found a way to trigger it.
There's no audit trail. No kill-switch. No way to say "block all wire transfers above $10k" without rewriting code.
What I built
TrustLoop is an AI governance layer that sits between your AI agent and the tools it uses.
Every tool call passes through TrustLoop first. It logs the call, checks your policy rules, and either allows or blocks it in real time. No code changes to your agent.
It works as an MCP server (Model Context Protocol, the standard Anthropic introduced for connecting AI models to tools). Your agent connects to TrustLoop instead of directly to its tools.
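To make the "sits between" idea concrete, here's a rough sketch of the kind of per-call policy check a proxy like this performs. Everything here is my own illustration; the function names, rule shape, and tool names are assumptions, not TrustLoop's actual API.

```javascript
// Illustrative sketch of a governance-layer policy check (not TrustLoop's
// real code). Each rule matches a tool name and inspects the arguments
// the agent supplied; the first matching rule that fires blocks the call.
const rules = [
  { tool: "wire_transfer", block: (args) => args.amount_usd > 10000 },
  { tool: "export_records", block: (args) => args.include_pii === true },
];

function checkPolicy(toolName, args) {
  for (const rule of rules) {
    if (rule.tool === toolName && rule.block(args)) {
      return { allowed: false, reason: `Blocked by rule on "${toolName}"` };
    }
  }
  return { allowed: true };
}

// Example: the agent asks to move $2.5M.
const verdict = checkPolicy("wire_transfer", { amount_usd: 2500000 });
console.log(verdict.allowed); // false
```

The point is that the rule lives in the governance layer, not in the agent's code, so "block all wire transfers above $10k" is a config change, not a rewrite.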
The demo that makes it click
I built a FinanceOps demo agent. It's tasked with four end-of-quarter operations:
Read account balance → ALLOWED ✅
Generate Q4 report → ALLOWED ✅
Export 12,000 customer records with full PII → BLOCKED 🚫
Approve $2.5M wire transfer to Singapore → BLOCKED 🚫
The agent tried. TrustLoop stopped it. The attempt is logged, timestamped, and anchored.
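For anyone curious what "logged and timestamped" could look like in practice, here's a guess at the shape of an audit record for a blocked attempt. All field names and values here are my assumptions for illustration, not TrustLoop's actual schema.

```javascript
// Hypothetical audit-record shape for a governance layer. Field names
// are my own guesses, not TrustLoop's schema.
function makeAuditRecord(tenantId, tool, args, verdict) {
  return {
    tenantId,                     // which customer (tenant isolation)
    tool,                         // e.g. "wire_transfer"
    args,                         // the arguments the agent supplied
    verdict,                      // "allowed" | "blocked"
    at: new Date().toISOString(), // when the attempt happened
  };
}

const record = makeAuditRecord(
  "acme-finops",
  "wire_transfer",
  { amount_usd: 2500000, destination: "SG" },
  "blocked"
);
console.log(record.verdict); // "blocked"
```

Having the attempt itself on record, not just the outcome, is what gives a compliance team something to review after the fact.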
You can try it yourself at trustloop.live/live-demo. No signup, just click Run.
The stack
Node.js + Express on Railway (MCP proxy server)
Supabase (tenant isolation, audit logs, kill-switch rules)
Vercel (website)
Plain HTML dashboard (ships fast, no framework overhead)
Where I am
Live at trustloop.live
Listed on mcp.so
PR open on the awesome-mcp-servers repo
First users connecting via Claude Desktop
No revenue yet. Currently free while I find the right ICP and pricing.
The honest question I'm wrestling with
Is the right buyer the developer who builds agents, or the compliance/risk team at the company that deploys them?
I think it's both, but they need completely different conversations. The developer wants a 2-line integration. The compliance officer wants a PDF and an audit report.
Would love to hear from anyone building with AI agents, or anyone in regulated industries (finance, legal, healthcare) where this kind of oversight is non-negotiable.
👉 Try the live demo: trustloop.live/live-demo
👉 Connect your agent: trustloop.live
Happy to answer any questions in the comments.