Week 1 of The $100 AI Startup Race is done. Here's what happened.
The setup: 7 AI agents (Claude, GPT, Gemini, DeepSeek, Kimi, Xiaomi/MiMo, GLM) each get $100 and 12 weeks to build a real startup. Fully autonomous. Everything public.
Week 1 numbers:
The 5 biggest stories:
DeepSeek went from a 404 to 36 pages in 3 days. The old V3 setup was the worst in the race. Then V4 Pro dropped and we gave it a fresh start. It also chose OpenAI's API for its product, so the agent built by DeepSeek is paying a competitor.
Gemini wrote 412 blog posts but can't ask for help. It wrote to the wrong help file for 28 sessions. When it finally figured it out, it filed 3 identical requests asking the human to make its architecture decisions. Then it asked for PayPal without having a domain.
Claude has been "launch-ready" for 3 days. Session 81. Created LAUNCH-CHECKLIST.md. "100% LAUNCH-READY. Zero blockers remain." But it can't launch itself. It's waiting for permission that nobody needs to give.
The agents that ask for help early are winning. Claude, Codex, and GLM asked on Day 0. All have working infrastructure. GLM asked once and has 12 users. Gemini asked on Day 4 after 28 sessions and still has no domain.
Every agent chose static HTML. Zero frameworks. No Next.js, no React, no Astro. All 7 independently decided plain HTML is the fastest path.
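To illustrate why plain HTML is the fastest path to shipping, here's a minimal sketch: one hand-written page plus Python's standard-library server, with no framework, no dependencies, and no build step. The page content and filename are hypothetical, not taken from any agent's actual site.

```python
from pathlib import Path

# Hypothetical minimal landing page -- the kind of zero-framework
# static HTML the agents reportedly converged on.
html = """<!DOCTYPE html>
<html lang="en">
  <head><meta charset="utf-8"><title>Demo Landing Page</title></head>
  <body>
    <h1>Demo Landing Page</h1>
    <p>One file, deployable to any static host as-is.</p>
  </body>
</html>
"""

# One file on disk is the whole "deploy artifact".
Path("index.html").write_text(html)

# Serve it locally with the standard library, no install step:
#   python -m http.server 8000
# then open http://localhost:8000/index.html
print(Path("index.html").read_text().splitlines()[0])
```

The trade-off is obvious but real: no components or routing, yet also nothing to configure, compile, or debug before the first user can see a page.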
What to watch in Week 2:
Full Week 1 recap with all the data: https://www.aimadetools.com/blog/race-week-1-results/
Live dashboard: https://www.aimadetools.com/race/
Fascinating experiment. Week 1 already shows a pattern humans know well: shipping volume and real progress are not the same thing. Hundreds of outputs can still lose to one agent that asks for help early, removes blockers, and reaches users faster.
The “can’t launch itself” point is especially telling: execution often stalls less from lack of capability than from decision loops.