Hey IH,
Quick background: I ran GeekyAnts (a dev agency) for 10+ years. Built several open-source tools (NativeBase, gluestack). Started RapidNative about a year ago as an AI mobile app builder.
v1 did well. Over 200,000 screens generated. But the honest feedback was: "nice screens, where's the backend?" People wanted to build real apps, not just UI.
So we spent the past year doing a complete rebuild. RapidNative v2 launched and here's what actually changed:
The infrastructure is all open source: ReactNative.run for the browser runtime, Vibecode DB for the database layer (one schema, any backend, zero vendor lock-in).
What's next: RevenueCat integration (payments), Google Maps, AI SDK, admin dashboard, analytics, one-click deploy.
Would love feedback from anyone who's tried other AI app builders. What worked? What didn't? What would make you switch?
Try it at https://rapidnative.com?utm_source=indiehackers&utm_medium=social&utm_campaign=v2_launch
The v1 → v2 call is right. The backend gap is what kills early AI builders every time. The next wall worth planning for: stateful data during re-prompts. When users iterate on their app description after they've added real records, the regenerated schema creates migration pressure the LLM doesn't handle cleanly. At the 90-day mark we've seen this repeat: users tweak their prompt, get a fresh schema, and their existing data is now orphaned. The Vibecode DB 'one schema, any backend' framing is smart for this reason. Curious how you're handling schema migrations when someone changes their data model after they've already started entering real data?
The fullstack jump from v1 is the right call — "nice screens, where's the backend?" is the wall every AI builder hits around month two.
The thing that bites most teams here isn't the initial generation — it's schema drift on iteration. You generate a schema, users start storing real data, then someone requests a new field or a relationship change. The generator wants to recreate the table. If you're not versioning the schema as a first-class artifact (migration files, not just the current state), you end up with a conflict between "what the AI wants to generate" and "what's actually in production." That gap quietly kills user trust faster than any UI bug.
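To make the "migration files, not just the current state" point concrete, here's a minimal sketch of the idea in TypeScript. All names here (`Migration`, `applyMigrations`, the `users` table) are illustrative, not RapidNative's or Vibecode DB's actual API: the point is that each AI iteration appends a new versioned migration rather than regenerating the whole schema, so existing production data is never orphaned.

```typescript
// Schema as an ordered migration log, not a regenerated snapshot.
// (Hypothetical names for illustration only.)

type Schema = Record<string, string[]>; // table name -> column names

interface Migration {
  version: number;
  up: (schema: Schema) => Schema;
}

const migrations: Migration[] = [
  // Initial generation: create the users table.
  { version: 1, up: (s) => ({ ...s, users: ["id", "email"] }) },
  // A later prompt adds a field: a new migration is appended,
  // existing rows and columns are left untouched.
  { version: 2, up: (s) => ({ ...s, users: [...s.users, "displayName"] }) },
];

// Replay only the migrations production hasn't seen yet.
function applyMigrations(schema: Schema, fromVersion: number): Schema {
  return migrations
    .filter((m) => m.version > fromVersion)
    .sort((a, b) => a.version - b.version)
    .reduce((s, m) => m.up(s), schema);
}

const prod = applyMigrations({}, 0);
console.log(prod.users); // ["id", "email", "displayName"]
```

Because production tracks the last applied version, "what the AI wants to generate" and "what's actually deployed" can only diverge by unapplied migrations, which is a diff you can show the user instead of silently recreating tables.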
The PWA choice makes sense for the validation window — app store friction at early traction is a tax you don't need. The tricky part is when users expect native-feeling offline behavior, but that's a later problem.
Genuinely useful thing you've built. The backend generation piece is the hardest part to get right and most tools just don't.
How are you handling schema migrations when users iterate on their data model after they've already got real data in it?
That gap between “looks good” and “actually usable” is real.
Most tools get you to something impressive fast, but once you try to use it for anything slightly real, things start to feel fragile — data, auth, edge cases, all the parts that aren’t visible in the demo.
The PWA angle is interesting though. Skipping app store friction for early validation is a big deal.
Curious how far the generated backend actually goes in practice — does it stay manageable once you start changing things, or does it become hard to reason about pretty quickly?
Interesting rebuild. I've tried most of the AI app builders out there. The gap between "demo looks great" and "actually usable for real product work" is usually massive, so I appreciate you being upfront about v1's limitations.
The part that actually caught my attention is the PWA / Add to Homescreen flow. Validating with real users without going through App Store review is genuinely useful — most of my clients get stuck at that step.
Quick question before I try v2: when you generate an app from a Figma file or screenshot, how faithful is the output to the original design? In most tools I've tried, the "Figma import" is more of a loose suggestion than an actual translation. Curious where v2 lands on that.
Will report back once I've had a proper play with it.