I've been building Anve Voice — a voice AI widget that actually takes actions on your website, not just answers FAQs.
The problem I kept seeing: most voice bots are fancy chatbots. They talk, but they don't DO anything. Visitors still have to click around, fill forms, navigate menus.
So I built something different.
Anve Voice converts speech → intent → actual DOM actions. It can:
• Click buttons and links
• Fill out forms
• Navigate between pages
• Guide visitors through complex workflows
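To make the speech → intent → action pipeline concrete, here's a minimal sketch of the middle step. This is illustrative only, not Anve Voice's actual API: the `Intent` type and `parseIntent` function are hypothetical, and a real system would use an LLM or NLU model rather than regex rules.

```typescript
// Hypothetical sketch: turning a transcript into a structured intent
// that a DOM executor could act on. Names are illustrative.

type Intent =
  | { kind: "click"; target: string }
  | { kind: "fill"; field: string; value: string }
  | { kind: "navigate"; path: string };

// Toy rule-based parser; production systems would use a model instead.
function parseIntent(transcript: string): Intent | null {
  const t = transcript.toLowerCase().trim();

  const nav = t.match(/^(?:show me|go to|open)\s+(.+)$/);
  if (nav) return { kind: "navigate", path: "/" + nav[1].replace(/\s+/g, "-") };

  const click = t.match(/^(?:book|click|press)\s+(?:a\s+)?(.+)$/);
  if (click) return { kind: "click", target: click[1] };

  return null; // unrecognized: fall back to asking the visitor to rephrase
}
```

The structured `Intent` is the contract between the messy voice side and the deterministic DOM side: everything downstream only ever sees typed actions.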
The technical challenge was bridging voice intent to browser automation reliably. Voice is messy — accents, fillers, interruptions. Converting that into precise actions on a dynamic DOM requires tight feedback loops and semantic understanding of page structure.
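One way to picture that semantic-matching step: score a fuzzy spoken target ("pricing") against an index of the page's interactive elements, and only act above a confidence threshold. Everything below is a hypothetical sketch, not the real implementation; the field names and scoring weights are invented for illustration.

```typescript
// Hypothetical: resolving a fuzzy voice target to one page element
// from a simplified page index. Structure and weights are illustrative.

interface PageElement {
  selector: string; // CSS selector the executor would act on
  text: string;     // visible label
  role: string;     // button, link, input...
}

function scoreMatch(target: string, el: PageElement): number {
  const t = target.toLowerCase();
  const label = el.text.toLowerCase();
  if (label === t) return 1.0;                         // exact label match
  if (label.includes(t) || t.includes(label)) return 0.7; // partial match
  // Token-overlap fallback for loosely phrased commands
  const tokens = new Set(t.split(/\s+/));
  const overlap = label.split(/\s+/).filter(w => tokens.has(w)).length;
  return overlap > 0 ? 0.4 : 0;
}

function resolveTarget(target: string, page: PageElement[]): PageElement | null {
  const scored = page
    .map(el => ({ el, score: scoreMatch(target, el) }))
    .sort((a, b) => b.score - a.score);
  // Below the threshold, confirm with the visitor instead of guessing:
  // that's the "tight feedback loop" for messy voice input.
  return scored.length > 0 && scored[0].score >= 0.7 ? scored[0].el : null;
}
```

The threshold is the key design choice: a wrong click on a live site is far worse than a clarifying question, so ambiguous matches should bounce back to the visitor.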
Why this matters for conversions: voice removes friction. Speech input is roughly three times faster than typing. When visitors can just SAY "show me pricing" or "book a demo" and it actually happens, conversion rates jump.
Built this over the last few months while talking to SaaS founders and e-commerce operators. The real insight? People don't want another chat widget. They want an interface layer that actually gets things done.
Live at https://anvevoice.app — would love feedback from the IH community on the approach.