
I stopped building onboarding and built AI instead.

I watched screen recordings of my users being confused. So I stopped building onboarding and built AI instead.


Opencals is a booking platform for service businesses. Multi-staff, multi-location, orders, payments, customer management, service rules, capacity limits. It's a genuinely complex system because the problem it solves is genuinely complex.

But complexity has a cost.

At some point I realized the app had grown into something that looked and felt like a small Shopify. You've got customer management, order and payment management, locations, staff members, services, service-specific rules, availability configurations, and all the combinations between them. A new merchant signing up for the first time is staring at a lot of knobs.

I knew this was a problem. I just didn't understand how bad it was until I watched screen recordings.


The onboarding problem

I had onboarding. Several versions of it, actually. Tooltips, guided tours, video walkthroughs, help articles. I kept tweaking it. None of it moved the needle in any meaningful way.

The recordings were humbling. Merchants would land in the dashboard, click around for a few minutes, get stuck, and leave. Not because the features weren't there. Because they didn't have the mental map yet to know what to do first.

Here's the thing I eventually accepted: nobody actually reads documentation. Nobody watches tutorial videos when they're just trying a new app. They want to click three things, see something that looks right, and only then decide if this is worth their time. And that's completely reasonable - they're trying the app, not signing up for a course.

I'm building this alone. I can't hire a support team to manually onboard every merchant. And even if I could, that doesn't scale. I needed something else.


The obvious next step turned out to be the right one

Everyone is using AI now. That sounds obvious but it took me a while to actually act on it. When I added a chat interface powered by AI, I expected some merchants to use it. What I didn't expect was that nearly 100% of merchants use it, often before they touch any other part of the onboarding flow.

The moment you put a chat input in the UI with an AI behind it, people know what it is. They know they can ask questions. They know it will understand them. And unlike documentation, it responds to exactly what they're confused about, not what I guessed they'd be confused about.

But I didn't want to build a chatbot bolted onto the side of the app. I wanted to build something that's actually part of the infrastructure.


AI as an interface, not a feature

The AI in Opencals runs on the same API that the actual UI uses. The same API I've exposed publicly. When a merchant asks the AI to create a service, add a staff member, or block off a date range - it's making the same API calls that clicking the buttons would make. There's no separate logic for "AI mode." It's just another client.

This means merchants can do basically everything through the chat interface. They can configure the system, ask how something works, or just hand the AI a task and let it go. The chat interface isn't a simplified version of the app; it's a different way to use the full app.
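Since the chat and the UI share one API, a tool call from the model is just another request to the same endpoints. Here's a minimal sketch of that idea in Python - the tool names and endpoint paths are invented for illustration, not Opencals' actual API:

```python
import json

# Hypothetical tool registry: each tool the model can call maps 1:1 onto the
# same public endpoint a UI button would hit. (Paths and names are invented.)
TOOLS = {
    "create_service": {"method": "POST", "path": "/api/services"},
    "add_staff_member": {"method": "POST", "path": "/api/staff"},
    "block_date_range": {"method": "POST", "path": "/api/availability/blocks"},
}

def execute_tool_call(name: str, arguments: dict) -> dict:
    """Translate an LLM tool call into the request a button click would send."""
    spec = TOOLS[name]
    # A real client would send an authenticated HTTP request here;
    # this sketch just returns the request it would have made.
    return {"method": spec["method"], "path": spec["path"], "body": arguments}

print(json.dumps(execute_tool_call("create_service",
                                   {"name": "Haircut", "duration_min": 30})))
```

Because there's no separate "AI mode" logic, whatever the endpoint validates or rejects applies equally to both clients.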

For a new merchant who doesn't know where to start, this is the difference between leaving and staying. Instead of navigating a complex system they don't understand yet, they just describe their business. The AI handles the rest.


The development side: same infrastructure, completely different use case

Here's the part I didn't expect to matter as much as it does.

When I'm developing something new in Opencals, I can spin up an MCP server against the same documentation and API that the AI chat runs on. This means when I'm working on a new module or extending existing functionality, I have the full system context available without having to re-explain it to the AI every single session.

That sounds like a small thing. It isn't.

Before this, I'd spend the first 15–20 minutes of any AI-assisted development session re-establishing context: here's the data model, here's how this module works, here's what already exists. Now I don't have to. The AI already knows, because it's connected to the same source of truth the rest of the system uses.
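One way to picture that setup, stripped down to plain Python (a real version would use an MCP SDK; the resource names and doc contents here are invented):

```python
# Conceptual sketch, not Opencals' actual code: a dev-time MCP server
# advertises the same docs and API spec the in-app AI already consumes,
# so every coding session starts with full system context.
API_CONTEXT = {
    "data-model": "Merchants own locations; locations have staff, services, rules.",
    "api-surface": "POST /api/services, POST /api/staff, POST /api/availability/blocks",
    "business-rules": "Capacity limits apply per service, per location.",
}

def list_resources() -> list[str]:
    """What the server advertises to a connected AI coding client."""
    return sorted(API_CONTEXT)

def read_resource(name: str) -> str:
    """What the client fetches at session start, replacing the manual
    'here's the data model, here's how this module works' preamble."""
    return API_CONTEXT[name]

print(list_resources())
```

The key property is that there is one source of truth: update the docs or API spec once, and both the merchant-facing chat and the dev-time assistant see the change.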

My development velocity increased significantly. I haven't measured it precisely, but I'd estimate I'm shipping features 30–40% faster than before this was in place, purely from eliminated context overhead.


What I mean when I say "AI is infrastructure"

I've started thinking about this differently than I did 2 years ago.

AI used to feel like a feature you add. You pick a use case - chatbot, summary generator, search, whatever - integrate an LLM API, and ship it. It lives next to your real system.

That's not what I built. What I built is a system where AI is tightly bound to the actual code. It knows the data model. It knows the API surface. It knows the business rules. It can act as both a user-facing interface and a development accelerant because it's built on the same foundation as everything else.

I didn't realize this was the right architecture until I was most of the way through it. And I think I was lucky to be building it relatively early in Opencals' life: embedding this into a large, mature codebase after the fact would have been significantly harder. Starting close to the beginning meant the integration was natural instead of grafted on.

This is now how I'll approach every project going forward. Not "should we add AI?" but "where does AI live in the infrastructure from day one?"

Posted to Opencals
  1.

    "AI as infrastructure, not a feature" — this is exactly the mental shift I went through. I'm building a sports intelligence platform and had the same realization from the opposite direction. Instead of building a dashboard where humans browse sponsorship data, I built an API that AI agents query directly. Same data, same scoring logic, but the primary interface is an MCP endpoint, not a UI.

    Your point about context overhead is huge. Before I connected my docs and API spec to an MCP server, every dev session started with 15 minutes of "here's what this codebase does." Now it just knows. That 30–40% velocity increase sounds about right — maybe even conservative.

    The part about onboarding really resonates too. You solved "users don't read docs" by giving them a chat interface. I'm solving the same problem from the agent side — agents don't read docs either; they need structured endpoints they can query. Same principle, different audience.

    Curious about one thing: when your AI makes API calls on behalf of merchants, how do you handle edge cases where the AI misinterprets intent? Like if someone says "remove my Thursday availability" but means just one Thursday, not every Thursday?

    1.

      Exactly! Your MCP endpoint approach is the right mental model - the interface IS the product when agents are the consumers.

      And yeah, you're touching on something I didn't get into in the post: the context management problem is brutal. It's not just "feed AI your docs once" — it's keeping that context current. Every time I ship a feature, change an API endpoint, or update a flow, the AI needs to know. Otherwise it confidently explains functionality that no longer exists.

      My current solution: orchestration across multiple AI agents, each with a different model and a specific role, plus caching. The orchestrator, the one actually talking to the user, runs on a high-reasoning model with thinking enabled. That alone handles 70–80% of ambiguity cases. It's more expensive, but for something that's acting on a user's calendar, you want it to think, not just pattern-match.

      On your Thursday question specifically: the answer is instructions. You literally write into the system prompt: "before modifying recurring events, always confirm whether the user means this occurrence or all future ones." The AI doesn't infer this on its own - you have to teach it the edge cases you've already identified. Then it asks for confirmation before doing anything irreversible.

      It's not magic, just prompt engineering + the right model tier + confirmation gates for any destructive action. With those three things, you get to ~90% reliability, which is actually good enough for most real-world use.
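      A minimal sketch of that confirmation-gate idea (the tool names and the destructive-action list are invented, not Opencals' real ones):

      ```python
      # Hypothetical sketch: irreversible tool calls are held until the user
      # confirms the interpreted intent; everything else executes directly.
      DESTRUCTIVE = {"delete_availability", "cancel_booking", "remove_staff_member"}

      def handle_tool_call(name: str, arguments: dict, confirmed: bool = False) -> dict:
          """Gate irreversible actions behind an explicit user confirmation."""
          if name in DESTRUCTIVE and not confirmed:
              # First pass: don't execute; echo the interpreted intent back.
              return {
                  "status": "needs_confirmation",
                  "question": (
                      f"About to run {name} with {arguments}. "
                      "Just this occurrence, or all future ones?"
                  ),
              }
          return {"status": "executed", "tool": name}

      print(handle_tool_call("delete_availability", {"weekday": "Thursday"})["status"])
      # prints "needs_confirmation"
      ```

      The orchestrator only passes `confirmed=True` after the user answers, so a misread "Thursday" costs one extra question instead of a wrecked calendar.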

  2.

    If you're building something with embedded AI, I'd be curious how you approached it :)