I’ve been exploring whether AI needs a more structured interface than raw natural language.
Built an early prototype called GALEN:
speech/text → compact structured instruction → AI
So far it reduces input size and improves consistency. Curious if others see value in this layer.
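To make the "speech/text → compact structured instruction" step concrete, here is a minimal sketch of what such a layer might emit. GALEN's actual format isn't described in the post, so the `Instruction` shape, field names, and the hard-coded `compile_utterance` stand-in below are purely illustrative assumptions, not the real implementation:

```python
# Hypothetical illustration of a "structured instruction" layer.
# The Instruction schema and compile_utterance() are invented for this
# sketch; the real GALEN pipeline may look nothing like this.
import json
from dataclasses import dataclass, asdict

@dataclass
class Instruction:
    action: str            # what the AI should do
    target: str            # what it should act on
    constraints: list[str] # hard requirements, kept terse

def compile_utterance(raw: str) -> Instruction:
    # Stand-in for the real speech/text -> instruction step.
    # A real compiler would parse intent; this demo hard-codes one case.
    return Instruction(
        action="summarize",
        target="attached report",
        constraints=["<=200 words", "plain language"],
    )

raw = ("Hey, could you maybe read through the report I attached and, you "
       "know, give me a short summary? Keep it simple, nothing too long, "
       "maybe 200 words max.")
compact = json.dumps(asdict(compile_utterance(raw)), separators=(",", ":"))
print(len(raw), len(compact))  # the structured form is much shorter
```

The point of the sketch is the size and regularity of the output: the same compact schema every time, regardless of how rambling the raw utterance is, which is where the consistency gain would come from.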
This is interesting — especially the “reduce input → improve consistency” angle. That’s a real pain point most people don’t articulate well.
Right now, though, it feels like you’re building an input layer but describing it very abstractly.
If this is about turning messy human intent into clean instructions for AI, that’s a much clearer wedge.
The risk: if people can’t grasp what this does within 2–3 seconds, they’ll default back to raw prompting.
Also, the name GALEN doesn’t really signal what this layer does. For something this foundational, clarity in positioning (and naming) will matter a lot for adoption.
Curious — are you thinking of this more as a developer tool, or something non-technical users would rely on daily?