For years I thought the visual editor was the holy grail for no-code A/B testing. Click on a headline, change the text, ship a variant without touching code. That was the dream.
And honestly, it worked… until it didn’t.
As sites became more complex, the visual editor started breaking in all the ways growth teams already complain about:
Modern websites are built with nested components, dynamic rendering, responsive layouts, and all sorts of weird edge cases. The visual editor often grabbed the wrong element or broke something on mobile. You’d fix one thing, and something else would glitch.
The moment the editor couldn’t correctly identify an element, it would ask for a CSS selector. At that point, most non-technical users would freeze. It stopped being “simple,” even if the interface still looked friendly.
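The brittleness is easy to demonstrate. Below is a toy sketch (not any real editor's code) of why auto-generated positional selectors like `div > :nth-child(2)` break the moment the page structure shifts, while a lookup keyed on a stable attribute survives. The tree shape and `testId` attribute are illustrative assumptions.

```typescript
// Toy element tree -- a stand-in for the real browser DOM.
type Elem = { tag: string; testId?: string; children: Elem[] };

// Positional lookup, like an auto-generated ":nth-child" selector:
// follow child indices from the root.
function byPosition(root: Elem, path: number[]): Elem | undefined {
  let cur: Elem | undefined = root;
  for (const i of path) cur = cur?.children[i];
  return cur;
}

// Semantic lookup: find the node carrying a stable attribute.
function byTestId(root: Elem, id: string): Elem | undefined {
  if (root.testId === id) return root;
  for (const c of root.children) {
    const hit = byTestId(c, id);
    if (hit) return hit;
  }
  return undefined;
}

// v1 of the page: the price badge is the card's second child.
const v1: Elem = {
  tag: "div",
  children: [
    { tag: "h1", children: [] },
    { tag: "span", testId: "price", children: [] },
  ],
};

// v2: a redesign inserts a rating widget, so the same position
// now points at the wrong element -- the classic visual-editor break.
const v2: Elem = {
  tag: "div",
  children: [
    { tag: "h1", children: [] },
    { tag: "div", testId: "rating", children: [] },
    { tag: "span", testId: "price", children: [] },
  ],
};

console.log(byPosition(v1, [1])?.testId); // "price"
console.log(byPosition(v2, [1])?.testId); // "rating" -- wrong element
console.log(byTestId(v2, "price")?.testId); // "price" -- still correct
```

This is exactly the failure mode above: the editor records *where* an element was, not *what* it is, so any DOM change silently redirects the experiment.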
Small cosmetic changes were fine. But anything involving pricing components, layouts, modals, or dynamic content became a headache. The editor just wasn’t built for real product-level experimentation.
After fighting this for long enough, I finally accepted that the visual editor wasn’t the future. It was a clever workaround for an older era of the web.
So I built something else: Generative Experimentation (GX)
Instead of clicking around on a page and hoping the editor guesses the right element, you just describe the change you want in plain language.
Example:
“Please help me reposition this side thumbnail selection to the bottom of the main product image.”
Under the hood, the system interprets the request, identifies the right elements on the page, and generates and applies the change.
No dragging boxes. No CSS selectors. No fear of breaking the layout.
It basically shifts A/B testing from “manually editing the DOM” to “expressing the idea and letting the system figure out the implementation.”
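That shift can be sketched in code. The two-stage pipeline below (plain language → structured intent → concrete patch) is my assumption about how such a system could be organized, not Mida's actual internals; the `data-gx` attribute, the single hard-coded phrase match, and the flexbox change are all hypothetical stand-ins for what an LLM-backed system would produce.

```typescript
// Stage outputs: a structured intent, then a concrete DOM/CSS patch.
type Intent = { action: "move"; target: string; destination: string };
type Patch = { selector: string; css: Record<string, string> };

// Stage 1: turn plain language into a structured intent.
// A real system would call an LLM here; this stub matches one phrase.
function parseIntent(request: string): Intent {
  if (/thumbnail/i.test(request) && /bottom/i.test(request)) {
    return {
      action: "move",
      target: "thumbnail-strip",
      destination: "below-main-image",
    };
  }
  throw new Error("could not parse request");
}

// Stage 2: resolve the intent into a concrete change -- here, a
// flex-direction swap on a (hypothetical) gallery container.
function generatePatch(intent: Intent): Patch {
  if (intent.action === "move" && intent.destination === "below-main-image") {
    return {
      selector: "[data-gx='product-gallery']",
      css: { "flex-direction": "column" },
    };
  }
  throw new Error("unsupported intent");
}

const patch = generatePatch(
  parseIntent(
    "Reposition this side thumbnail selection to the bottom of the main product image."
  )
);
console.log(patch.selector); // "[data-gx='product-gallery']"
```

The point of the sketch is the separation of concerns: the user only ever touches stage 1's input, and the system owns everything that used to require a developer.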
People move faster. Developers stop doing low-leverage testing tasks. Marketers aren’t blocked. And experiments that used to take a day now take a few minutes.
Honestly, I didn’t expect this to feel like such a big shift… but it kind of is.
Are you also seeing the limits of visual editors?
Would you use a plain-English way of generating experiments?
Anything you’d expect a tool like this to handle that others don’t?
If you’re curious what I built, here’s the product page (not trying to hard-sell, just sharing): https://www.mida.so/generative-experimentation
Surely having both is the best way? The ability to test colors with a UI is 10x faster and cheaper than doing it with AI, but having the AI handle more complex tasks makes sense.
The issue is that most no-code platforms are built on custom JSON, not HTML/code.