
How I Put Claude AI in Jail and Got It to Ship Production Code

We just shipped working, secure code to production.

It was written by Claude.

But only after I locked it in a container, stripped its freedoms, and told it exactly what to do.

This isn’t an AI-generated brag post.

This is an explanation of what happens when you stop treating LLMs like co-founders and start treating them like extremely clever interns.

The Problem: Vibe Coding Is Chaos

If you’ve ever prompted AI to “build me a secure backend”, then you’ve experienced:

- Hard-coded secrets
- No config separation
- Auth hacked together
- Layers in the wrong places
- Database logic in controller methods
- Security reminiscent of a first-year student project

It feels impressive. But the output is not shippable.

I once tried building a Monkey-Island-style game with Claude at 2am just for fun. It ended with me screaming at a yellow rectangle on an HTML canvas.

Fun? Yes.

Useful? Not remotely.

The Insight: Claude’s Not the Problem, You Are

Claude is phenomenally good at code generation if you feed it the right prompts, at the right level of granularity, and in the right order.

When I use it personally, it acts as a co-architect. I bounce ideas off it, get help debugging, and sometimes it even surprises me with novel solutions (like using inherited env vars + process scanning for child cleanup across Windows/Linux).

But left to its own devices on a complex problem or wide-open scope?

Chaos.

The gap isn’t capability, it’s orchestration.

So… I put Claude in jail. Here’s what I did:

1. Claude gets containerized
A clean, temporary dev environment. No Git credentials. Limited network access. No escape.
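The post doesn't publish the actual container setup, so here is a minimal sketch of what an ephemeral, locked-down run could look like. The image name `claude-dev-env` and the `claude-task` entrypoint are hypothetical; `--rm` and `--network none` are standard Docker CLI flags (`none` is stricter than the "limited network access" described above).

```python
def jailed_docker_cmd(task_prompt: str, image: str = "claude-dev-env") -> list[str]:
    """Build a docker invocation for a throwaway, network-isolated container."""
    return [
        "docker", "run",
        "--rm",                            # container is destroyed when the task ends
        "--network", "none",               # no network access, no rogue commits
        "--env", "GIT_TERMINAL_PROMPT=0",  # never prompt for (nonexistent) Git credentials
        image,
        "claude-task", task_prompt,        # hypothetical entrypoint for one sub-task
    ]

cmd = jailed_docker_cmd("Implement the token-handling sub-task")
```

Because `--rm` is baked into the invocation, teardown is automatic: there is no container left to accumulate state between stories.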

2. Start with a user story
Human developers aren’t expected to work off a one-line mission statement, so why should AI be any different? I feed it a detailed user story that a human developer would be happy with.

3. A chain-of-thought agent breaks down the work
“Build a login system” becomes 20+ sub-tasks: token handling, session state, role config, browser caching, etc.
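The breakdown step produces a dependency-ordered list of sub-tasks. The real system generates this with a chain-of-thought agent; the sketch below hard-codes a few entries just to show the shape of the output (the `SubTask` type and field names are my invention, not the author's schema).

```python
from dataclasses import dataclass, field

@dataclass
class SubTask:
    title: str
    depends_on: list = field(default_factory=list)  # titles that must finish first

def break_down(story: str) -> list[SubTask]:
    # In the real pipeline an LLM produces this list from the user story;
    # hard-coded here for illustration.
    return [
        SubTask("Token handling"),
        SubTask("Session state", depends_on=["Token handling"]),
        SubTask("Role config"),
        SubTask("Browser caching", depends_on=["Session state"]),
    ]

tasks = break_down("Build a login system")
```

Encoding dependencies explicitly is what lets the orchestrator feed Claude one sub-task at a time in a sensible order, instead of one giant prompt.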

4. Claude gets micromanaged step-by-step
Each sub-task is prompted as a mini workflow: analyse → code → fix → verify
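That mini workflow can be sketched as a fixed loop over phases. `run_step` is a stand-in for a real Claude API call (returning a canned string here); the point is the structure: every sub-task passes through the same four gates, in order.

```python
def run_step(phase: str, subtask: str) -> str:
    # Placeholder for a real model call; in production this would send
    # a phase-specific prompt and return Claude's response.
    return f"{phase} done for {subtask}"

def execute_subtask(subtask: str) -> list[str]:
    """Run one sub-task through the analyse -> code -> fix -> verify workflow."""
    transcript = []
    for phase in ("analyse", "code", "fix", "verify"):
        transcript.append(run_step(phase, subtask))
    return transcript

log = execute_subtask("token handling")
```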

5. A final Claude pass reviews everything
It outputs a structured JSON diff with explanations.
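The actual diff schema isn't published, so the shape below is assumed for illustration. What matters is the validation idea: the orchestrator can mechanically reject a review pass in which any changed file is missing its diff or its explanation, before a human ever sees it.

```python
import json

# Hypothetical shape of the structured review output.
review_output = json.loads("""
{
  "files": [
    {
      "path": "auth/login.py",
      "diff": "+ def login(request): ...",
      "explanation": "Adds the login handler with token validation."
    }
  ]
}
""")

def validate_review(review: dict) -> bool:
    """Reject the pass if any changed file lacks a path, diff, or explanation."""
    return all(
        f.get("path") and f.get("diff") and f.get("explanation")
        for f in review.get("files", [])
    )
```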

6. We convert that into a GitHub PR
A human reviews. If it’s clean, we merge. If not, we loop until we’re happy.
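Turning the reviewed diff into a PR can be done with the GitHub CLI. A sketch, assuming the diff has already been applied and pushed to a branch; `gh pr create` and its `--head`, `--title`, and `--body-file` flags are real GitHub CLI options, while the branch name and file are made up.

```python
def open_pr_cmd(branch: str, title: str, body_file: str) -> list[str]:
    """Build the GitHub CLI invocation that turns the reviewed diff into a PR."""
    return [
        "gh", "pr", "create",
        "--head", branch,
        "--title", title,
        "--body-file", body_file,  # Claude's explanations become the PR description
    ]

cmd = open_pr_cmd("claude/login-tokens", "Add token handling", "review_summary.md")
```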

Every time the task ends, the Claude container is destroyed.

No memory of past sins. No rogue commits.

Clean. Contained. Effective.

The Result?

- 15–20 minutes per story
- PRs that pass internal review
- No vibe coding
- Shippable code with zero hallucinated libraries or misaligned assumptions

It's slower per interaction than just "ask it to code," but way faster overall. Less rework. Less debugging. More trust in what comes out the other end.

Can You Do This Too?

If you're expecting GPT or Claude to magically build your app from a one-line prompt, you're going to be disappointed.

But if you're willing to:

- Break tasks down
- Containerize your AI workflows
- Build orchestration logic
- Treat your LLM like a task-executing machine, not a co-pilot

...then yes, it can code for you. And you can ship it.

The Big Question

Don’t think of AI as a replacement. AI is the intern. Orchestration is the manager. And humans are still the ones deciding what matters.

But here’s what I keep asking myself, and I’d love to hear your thoughts:

Should we be building AI tools that act more like interns who learn under supervision… or should we keep pushing for AI that acts like senior engineers we can trust outright?

What do you think?

Want to See the Whole Architecture?

I wrote up a full 3-part breakdown of the system, including failures, lessons, and technical design:

Why I Put Claude in Jail

Read Part 1 on Substack → https://powellg.substack.com/

It’s funny, raw, and surprisingly useful. Part 3 includes a detailed breakdown of the orchestration model and how we integrated Claude into our platform.

TL;DR

LLMs aren't co-founders. They're interns.

Give them tight specs, step-by-step instructions, and no keys to prod.

We built a jail for Claude. And now it ships production-ready code.

Let me know if you want beta access - we’re opening testing soon and would love to get your feedback.

Posted to Building in Public on August 28, 2025