
Technical deep-dive: how our AI generates Playwright tests

A few people have asked how the AI test generation in ObserveOne actually works, so here's the honest breakdown:

It doesn't just hallucinate tests and hope for the best. There's an actual pipeline:

  1. It crawls your app like a user would — maps pages, interactions, the whole thing
  2. Picks out the flows that matter (login, checkout, search, CRUD)
  3. Generates standard Playwright tests — nothing proprietary
  4. Runs them immediately and fixes any failures on its own
  5. Self-heals when your UI changes down the road

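To make step 2 a bit more concrete: think of each crawled flow as a name plus the interactions recorded for it, scored against the high-value keywords. This is a hypothetical sketch, not ObserveOne's actual code — `Flow`, `scoreFlow`, and the keyword list are all illustrative:

```typescript
// Hypothetical sketch: rank crawled flows by how likely they are to matter.
// Names and keyword weights are illustrative, not ObserveOne internals.
interface Flow {
  name: string;    // e.g. "auth", derived from URLs/labels seen while crawling
  steps: string[]; // the interactions the crawler recorded for this flow
}

const KEYWORDS = ["login", "checkout", "search", "create", "update", "delete"];

function scoreFlow(flow: Flow): number {
  // One point per high-value keyword the flow touches.
  const text = (flow.name + " " + flow.steps.join(" ")).toLowerCase();
  return KEYWORDS.filter((k) => text.includes(k)).length;
}

function pickFlows(flows: Flow[], minScore = 1): Flow[] {
  // Keep only flows that touch at least one keyword, highest-scoring first.
  return flows
    .filter((f) => scoreFlow(f) >= minScore)
    .sort((a, b) => scoreFlow(b) - scoreFlow(a));
}

const flows: Flow[] = [
  { name: "auth", steps: ["open /login", "fill email", "submit"] },
  { name: "about page", steps: ["open /about", "scroll"] },
];
console.log(pickFlows(flows).map((f) => f.name)); // [ 'auth' ]
```

The real ranking is fuzzier than keyword matching, but the shape is the same: crawl output in, a prioritized shortlist of flows out, and only that shortlist gets tests generated.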
The output is just normal Playwright code. You can read it, edit it, export it, run it locally. No vendor lock-in at all.
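The self-healing in step 5 is easier to reason about if you picture each element carrying an ordered list of selector candidates. This sketch is my simplified illustration (the function and type names are made up, and the probe would wrap Playwright's locator checks in the real thing):

```typescript
// Hypothetical sketch of the self-healing idea: fall back down an ordered
// list of selector candidates when the preferred one stops matching.
type SelectorProbe = (selector: string) => boolean; // would wrap a Playwright locator check

function resolveSelector(candidates: string[], exists: SelectorProbe): string {
  for (const candidate of candidates) {
    if (exists(candidate)) return candidate; // first still-valid selector wins
  }
  throw new Error(`No candidate matched: ${candidates.join(", ")}`);
}

// Simulated DOM after a refactor: the test id survived, the CSS class did not.
const domSelectors = new Set(["[data-testid=submit]", "button[type=submit]"]);
const healed = resolveSelector(
  ["button.btn-primary", "[data-testid=submit]", "text=Submit"],
  (s) => domSelectors.has(s),
);
console.log(healed); // [data-testid=submit]
```

Because the healed selector is written back into plain Playwright code, the repaired test stays readable and exportable like everything else.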

Honestly, building this was the hardest thing I've worked on. The crawling alone took 3 months to get right. But the end result is tests that actually work in production, not just in a demo environment.

Happy to answer any technical questions. What would make you trust — or not trust — AI-generated test code?

on April 2, 2026