I watched screen recordings of my users being confused. So I stopped building onboarding and built AI instead.
Opencals is a booking platform for service businesses. Multi-staff, multi-location, orders, payments, customer management, service rules, capacity limits. It's a genuinely complex system because the problem it solves is genuinely complex.
But complexity has a cost.
At some point I realized the app had grown into something that looked and felt like a small Shopify. You've got customer management, order and payment management, locations, staff members, services, service-specific rules, availability configurations, and all the combinations between them. A new merchant signing up for the first time is staring at a lot of knobs.
I knew this was a problem. I just didn't understand how bad it was until I watched screen recordings.
The onboarding problem
I had onboarding. Several versions of it, actually. Tooltips, guided tours, video walkthroughs, help articles. I kept tweaking it. None of it moved the needle in any meaningful way.
The recordings were humbling. Merchants would land in the dashboard, click around for a few minutes, get stuck, and leave. Not because the features weren't there. Because they didn't have the mental map yet to know what to do first.
Here's the thing I eventually accepted: nobody actually reads documentation. Nobody watches tutorial videos when they're just trying a new app. They want to click three things, see something that looks right, and only then decide if this is worth their time. And that's completely reasonable - they're trying the app, not signing up for a course.
I'm building this alone. I can't hire a support team to manually onboard every merchant. And even if I could, that doesn't scale. I needed something else.
The obvious next step turned out to be the right one
Everyone is using AI now. That sounds obvious but it took me a while to actually act on it. When I added a chat interface powered by AI, I expected some merchants to use it. What I didn't expect was that nearly 100% of merchants use it, often before they touch any other part of the onboarding flow.
The moment you put a chat input in the UI with an AI behind it, people know what it is. They know they can ask questions. They know it will understand them. And unlike documentation, it responds to exactly what they're confused about, not what I guessed they'd be confused about.
But I didn't want to build a chatbot bolted onto the side of the app. I wanted to build something that's actually part of the infrastructure.
AI as an interface, not a feature
The AI in Opencals runs on the same API that the actual UI uses. The same API I've exposed publicly. When a merchant asks the AI to create a service, add a staff member, or block off a date range - it's making the same API calls that clicking the buttons would make. There's no separate logic for "AI mode." It's just another client.
This means merchants can do basically everything through the chat interface. They can configure the system, ask how something works, or just hand the AI a task and let it go. The chat interface isn't a simplified version of the app; it's a different way to use the full app.
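To make "just another client" concrete, here's a rough sketch of the shape of it. The endpoint, payload fields, and tool name below are placeholders I've made up for illustration, not the actual Opencals API.

```ts
// Illustrative sketch only: endpoint, fields, and tool name are placeholders,
// not the real Opencals API. The point is that the AI's tool handler is a thin
// wrapper around the same public endpoint the dashboard UI already calls.

type CreateServiceArgs = {
  name: string;
  durationMinutes: number;
  price: number;
};

// Tool definition handed to the LLM (standard JSON-schema tool-calling shape).
const createServiceTool = {
  name: "create_service",
  description: "Create a bookable service for the merchant",
  parameters: {
    type: "object",
    properties: {
      name: { type: "string" },
      durationMinutes: { type: "number" },
      price: { type: "number" },
    },
    required: ["name", "durationMinutes", "price"],
  },
};

// Handler: the same endpoint the "Create service" button in the UI posts to.
async function createService(args: CreateServiceArgs, apiKey: string) {
  const res = await fetch("https://api.example.com/v1/services", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(args),
  });
  if (!res.ok) throw new Error(`Create service failed: ${res.status}`);
  return res.json();
}
```

In a setup like this there's no AI-specific backend logic: validation, permissions, and business rules live behind the API, so they apply the same way no matter which client is calling.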
For a new merchant who doesn't know where to start, this is the difference between leaving and staying. Instead of navigating a complex system they don't understand yet, they just describe their business. The AI handles the rest.
The development side: same infrastructure, completely different use case
Here's the part I didn't expect to matter as much as it does.
When I'm developing something new in Opencals, I can spin up an MCP server against the same documentation and API that the AI chat runs on. This means when I'm working on a new module or extending existing functionality, I have the full system context available without having to re-explain it to the AI every single session.
That sounds like a small thing. It isn't.
Before this, I'd spend the first 15–20 minutes of any AI-assisted development session re-establishing context: here's the data model, here's how this module works, here's what already exists. Now I don't have to. The AI already knows, because it's connected to the same source of truth the rest of the system uses.
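If you're curious what that wiring can look like, here's a minimal sketch using the MCP TypeScript SDK. The tool names, the docs lookup, and the API base URL are placeholders for illustration, not my actual setup.

```ts
// Sketch of a dev-time MCP server exposing the same API and docs the product
// uses. Assumed setup with the MCP TypeScript SDK; names and URLs are placeholders.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const API_BASE = "https://api.example.com/v1"; // placeholder base URL

const server = new McpServer({ name: "opencals-dev", version: "0.1.0" });

// Let the coding assistant pull the live API spec instead of me pasting context.
server.tool("get_api_spec", async () => {
  const res = await fetch(`${API_BASE}/openapi.json`);
  return { content: [{ type: "text", text: await res.text() }] };
});

// Let it inspect real entities while I'm working on a module.
server.tool("get_service", { serviceId: z.string() }, async ({ serviceId }) => {
  const res = await fetch(`${API_BASE}/services/${serviceId}`);
  return { content: [{ type: "text", text: await res.text() }] };
});

await server.connect(new StdioServerTransport());
```

Because a server like this just reads from the same source of truth as everything else, there's nothing extra to keep in sync beyond the API itself.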
My development velocity increased significantly. I haven't measured it precisely, but I'd estimate I'm shipping features 30-40% faster than before this was in place, purely from eliminated context overhead.
What I mean when I say "AI is infrastructure"
I've started thinking about this differently than I did 2 years ago.
AI used to feel like a feature you add. You pick a use case (chatbot, summary generator, search, whatever), integrate an LLM API, and ship it. It lives next to your real system.
That's not what I built. What I built is a system where AI is tightly bound to the actual code. It knows the data model. It knows the API surface. It knows the business rules. It can act as both a user-facing interface and a development accelerant because it's built on the same foundation as everything else.
I didn't realize this was the right architecture until I was most of the way through it. And I think I was lucky to be building it relatively early in Opencals' life: embedding this into a large, mature codebase after the fact would have been significantly harder. Starting close to the beginning meant the integration was natural instead of grafted on.
This is now how I'll approach every project going forward. Not "should we add AI?" but "where does AI live in the infrastructure from day one?"
I've been there with my PM tools. Complex feature surface, screen recordings that were painful to watch. Ended up building the AI layer before onboarding docs. Haven't touched the tooltip backlog since.
This is a really solid shift — you didn’t fix onboarding, you removed the need for it 👍
Watching recordings → seeing confusion → replacing UI navigation with intent… that’s the right move.
The “AI as interface, not feature” part is the real unlock here.
Especially since it’s using the same API — that keeps everything clean and scalable.
One thing I’d watch going forward:
→ trust + control
People love asking AI, but they still want:
→ visibility on what’s being created
→ easy undo / edit
→ confidence nothing breaks silently
If you get that right, this becomes way more than onboarding — it becomes the primary way people use the product.
Also interesting that it improved your dev speed too — that’s a huge hidden win.
Curious — are users mostly asking setup questions, or actually using it to configure everything end-to-end?
Also, I’m running a small project (Tokyo Lore) where we highlight systems like this with a focused group of builders.
Since you’ve turned AI into core infra (not just a feature), this could be a strong fit — happy to share more 👍
Thanks, that's exactly the shift I'm going for. The trust + control point is spot on and honestly something I'm still tightening. Right now users get visibility on what was created and can edit everything after, but the "undo" concept isn't there yet. It's on the list.
On your question - it's actually pretty 50/50. Half is end-to-end workflows: setting up services, configuring staff schedules, locations. The other half is more conversational, "what's this staff member's availability looking like on Friday", "why isn't this slot showing up". AI as a layer on top of the data, not just a setup wizard.
What kind of systems are you highlighting in Tokyo Lore? Curious what the format looks like.
That 50/50 split is actually a really strong signal — means it’s not just onboarding, it’s becoming a real interface layer 👍
On Tokyo Lore — it’s pretty simple:
→ small, focused rounds (limited entries)
→ builders submit real products/systems
→ we put them in front of other builders
→ and observe what actually sticks (what people notice, use, question)
So it’s less about hype, more about:
does this logic hold up with real users?
We usually look for things like:
→ clear underlying system (like your AI-as-interface approach)
→ real problem being solved
→ something that can be tested/validated quickly
Your “AI connected to the actual API” angle fits really well into that — it’s not just a feature, it’s a different way to use the product.
Happy to share details if you decide to explore it 👍
"AI as infrastructure, not a feature" — this is exactly the mental shift I went through. I'm building a sports intelligence platform and had the same realization from the opposite direction. Instead of building a dashboard where humans browse sponsorship data, I built an API that AI agents query directly. Same data, same scoring logic, but the primary interface is an MCP endpoint, not a UI. Your point about context overhead is huge. Before I connected my docs and API spec to an MCP server, every dev session started with 15 minutes of "here's what this codebase does." Now it just knows. That 30-40% velocity increase sounds about right — maybe even conservative. The part about onboarding really resonates too. You solved "users don't read docs" by giving them a chat interface. I'm solving the same problem from the agent side — agents don't read docs either, they need structured endpoints they can query. Same principle, different audience. Curious about one thing: when your AI makes API calls on behalf of merchants, how do you handle edge cases where the AI misinterprets intent? Like if someone says "remove my Thursday availability" but means just one Thursday, not every Thursday?Exactly! Your MCP endpoint approach is the right mental model - the interface IS the product when agents are the consumers.
And yeah, you're touching on something I didn't get into in the post: the context management problem is brutal. It's not just "feed AI your docs once" — it's keeping that context current. Every time I ship a feature, change an API endpoint, or update a flow, the AI needs to know. Otherwise it confidently explains functionality that no longer exists.
My current solution: orchestration across multiple AI agents, each with a different model and a specific role, plus caching. The orchestrator, the one actually talking to the user, runs on a high-reasoning model with thinking enabled. That alone handles 70–80% of ambiguity cases. It's more expensive, but for something that's acting on a user's calendar, you want it to think, not just pattern-match.
On your Thursday question specifically: the answer is instructions. You literally write into the system prompt: "before modifying recurring events, always confirm whether the user means this occurrence or all future ones." The AI doesn't infer this on its own - you have to teach it the edge cases you've already identified. Then it asks for confirmation before doing anything irreversible.
It's not magic, just prompt engineering + the right model tier + confirmation gates for any destructive action. With those three things, you get to ~90% reliability, which is actually good enough for most real-world use.
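If you want the concrete shape of that, here's a rough sketch of a confirmation gate. The tool name, fields, and prompt wording are made up for illustration, not my actual implementation.

```ts
// Rough sketch of a confirmation gate for a destructive action.
// Names, fields, and prompt wording are illustrative placeholders.

// The instruction side: goes into the orchestrator's system prompt.
const SYSTEM_PROMPT_EXCERPT = `
Before modifying recurring events, always confirm whether the user means
this occurrence only or all future occurrences.
Never call a destructive tool unless the user has explicitly confirmed.
`;

// The code side: the gate is enforced here, not just in the prompt.
type DeleteAvailabilityArgs = {
  staffId: string;
  date: string;                               // ISO date the user referred to
  scope: "single_occurrence" | "all_future";  // which Thursday(s)
  confirmed: boolean;                         // set only after an explicit user "yes"
};

async function deleteAvailability(args: DeleteAvailabilityArgs) {
  // If the model skipped confirmation, bounce the request back as a question.
  if (!args.confirmed) {
    const weekday = new Date(args.date).toLocaleDateString("en-US", { weekday: "long" });
    return {
      status: "needs_confirmation",
      question: `Remove availability for ${args.date} only, or every ${weekday} going forward?`,
    };
  }
  // ...then make the same public API call the UI would make...
  return { status: "ok" };
}
```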
Love the focus on operational complexity. Most 'booking' tools fall apart the second you add a second location or a rotating team. Usage-based pricing is a smart move for this, too—good luck with the launch!
Hey, saw your Opencals tool - looks super useful for AI builders and firms.
I'm running AnyAI hub, a marketplace built specifically for vertical AI tools.
Would love to have Opencals listed here. First 6 months are totally free, no fees at all, and I'll personally help set up the listing.
Interested? I can send you the direct listing link.
Using the same API the UI uses is the part that actually matters. Most "AI onboarding" tools just bolt an LLM on top of existing flows and ask it to explain what the screen does. When the agent is on the same substrate as the product, it knows what's actually happening, not just what's visible.
We've been building at a similar layer with StoreMD. The monitoring agent doesn't call a separate endpoint to describe what's wrong with a store. It runs the same checks the product runs internally. The responses are grounded in a way that's genuinely hard to fake with a wrapper.
The 30-40% dev velocity gain makes sense from that angle too. You stop writing docs nobody reads and start shipping product surface the agent can use directly.
What does the failure mode look like when a merchant asks something the agent can't handle yet?
If you're building something with embedded AI, I'd be curious how you approached it :)