Hey IH! I'm Bikash, and over the last 4 months (started late October 2025) I built Cuneiform Chat — an AI agent platform that lets businesses deploy knowledge-base chatbots across Telegram, WhatsApp, Discord, Slack, web widgets, and more. It's live in production now.
I want to share the real story — architecture, mistakes, and what it actually looks like to build enterprise software with an AI coding partner.
The system is 9 repos: 6 backend services, 2 frontends, and a shared SDK.
6 MongoDB databases, Redis isolated by DB number per service, Pinecone for vectors with per-tenant namespace isolation, S3 for document storage.
I made it multi-tenant from day one. Every query filters by organization. Every S3 path, every Pinecone namespace, every Redis key — all scoped to the tenant.
Cost: every feature takes longer, every test has to verify isolation, and every AI-generated change has to follow the isolation pattern.
Value: any organization's data is physically impossible to access from another org's context. For B2B, this is table stakes. Building it in from the start was far cheaper than retrofitting.
Claude Code is my primary development partner. Not autocomplete — a collaborator that reads my codebase, understands the architecture, and writes production code.
I maintain ~30 reference docs in a .claude/ directory — architecture decisions, service patterns, API conventions, feature specs.
A solo developer with an AI coding partner can maintain a 6-microservice architecture that would normally need a team of 5-8. The tradeoff is heavy investment in documentation — not for humans, but for your AI to maintain context across sessions.
Not marketing from day one. I spent months building in silence when I should have been talking to customers and creating content from the start. The product was production-ready long before anyone knew it existed. If you're a builder, the instinct is to keep building. Fight that instinct. The best time to start marketing was the day I wrote the first line of code.
Skipping integration tests early. Unit tests caught logic bugs. But the production bugs were integration bugs — wrong field names between services, mismatched Redis keys, routes that worked in isolation but failed with auth middleware. Write integration tests from the start.
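As an illustration of the kind of test that would have caught those bugs, here is a contract-style check between a hypothetical producer and consumer service. All function and field names are invented for the example; the idea is to assert the exact field names one service emits and another expects, which unit tests of either side in isolation never exercise.

```python
# Hypothetical cross-service contract test: assert the field names one
# service emits match what the other reads. A renamed field fails here,
# not in production.
def billing_event(org_id: str, credits: int) -> dict:
    # producer side (e.g. an agent service emitting usage)
    return {"organization_id": org_id, "credits_used": credits}

def apply_usage(event: dict) -> int:
    # consumer side (e.g. a billing service) -- raises KeyError if the
    # producer renames a field, exactly the class of bug unit tests miss
    return event["credits_used"]

def test_usage_contract():
    event = billing_event("acme-42", 7)
    assert apply_usage(event) == 7

test_usage_contract()
print("contract test passed")
```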
Over-engineering billing before having paying customers. I extracted billing into its own microservice with quota enforcement, credit tracking, and webhook handlers — before a single customer had paid me anything. That entire service could have been a simple Polar.sh checkout link and a boolean flag for months.
Not building in public sooner. I had a compelling story the entire time — solo dev, 6 microservices, AI coding partner — and I told nobody. Every architectural decision, every production bug, every late-night debugging session was content I never published. I'm starting this now, months after launch, instead of from day one.
Multi-tenant from day one. Already said it. Worth repeating.
Building a custom tracing service. Every LLM call, every RAG query, every API request across all services gets traced to a centralized dashboard. When you're solo, you can't afford to spend hours hunting bugs across microservices. The tracing service with its cost tracking (24 recording points) paid for itself within the first week of production debugging. I can see exactly which step in a multi-service pipeline failed, what it cost, and how long it took. Same thinking led me to build a test dashboard that orchestrates and monitors test runs across all repos from a single UI — when you don't have a QA team, you build one. If I were starting over, these developer tools would be among the first things I'd build.
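A rough sketch of the tracing idea, assuming a context-manager API. The real service, its dashboard, and its 24 cost-recording points are not shown; this only illustrates recording per-step duration and cost for a multi-step pipeline.

```python
# Hypothetical tracing helper: each pipeline step records its duration
# (and, for LLM calls, a cost) into a trace that could be shipped to a
# central dashboard. Names are illustrative, not the platform's real API.
import time
from contextlib import contextmanager

trace: list[dict] = []

@contextmanager
def traced(step: str, cost_usd: float = 0.0):
    start = time.perf_counter()
    try:
        yield
    finally:
        trace.append({
            "step": step,
            "ms": round((time.perf_counter() - start) * 1000, 2),
            "cost_usd": cost_usd,
        })

with traced("embed_query", cost_usd=0.0001):
    time.sleep(0.01)  # stand-in for an embedding call
with traced("vector_search"):
    time.sleep(0.01)  # stand-in for a Pinecone query

print([t["step"] for t in trace])
```

With every step wrapped like this, "which step in the pipeline failed, what did it cost, how long did it take" becomes a lookup instead of a hunt.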
Config-as-code for tier limits. All subscription tier configs live in YAML files in the shared SDK. No database queries; the platform admin panel gets a read-only view. Change the YAML, deploy, done. Every service reads the same config, so they can't disagree about what a "Plus" plan includes.
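A stdlib-only sketch of the pattern. The dict stands in for what parsing the YAML tier file would return, and the tier names and limit fields are assumptions, not the real schema.

```python
# Illustrative config-as-code tier limits. In the real setup these live
# in YAML in the shared SDK; the dict below represents the parsed result
# so the example stays self-contained.
TIERS = {
    "free": {"agents": 1, "messages_per_month": 500},
    "plus": {"agents": 5, "messages_per_month": 10_000},
}

def limit(tier: str, key: str) -> int:
    # Every service calls the same lookup over the same file, so no two
    # services can disagree about what a plan includes.
    return TIERS[tier][key]

print(limit("plus", "agents"))               # 5
print(limit("free", "messages_per_month"))   # 500
```

The design choice: limits change at deploy time, not at runtime, which trades flexibility for the guarantee that every service is reading the identical config.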
Building Saba — the platform's own AI assistant. Saba is a meta-feature: an AI assistant that knows the platform itself. It answers customer questions about their account, usage, billing, agent configuration — using the same agent and RAG infrastructure that powers the customer-facing chatbots. It composes tools dynamically (subscription lookup, usage stats, knowledge search, configuration help) based on the question. Essentially, the platform eats its own dog food. Customers get instant self-service support, and I don't have to be available 24/7 for basic questions. For a solo founder, that's not a nice-to-have — it's survival.
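A toy illustration of dynamic tool composition. In the real system the agent presumably selects tools via the LLM; this keyword routing is only a stand-in, and the tool names and triggers are invented for the example.

```python
# Hypothetical tool router: pick which tools to expose to the assistant
# based on the question. The real selection would be agent/LLM-driven;
# keyword matching keeps the sketch self-contained.
TOOLS = {
    "subscription_lookup": ["plan", "subscription", "billing"],
    "usage_stats": ["usage", "credits", "messages"],
    "knowledge_search": ["how", "what", "configure"],
}

def compose_tools(question: str) -> list[str]:
    q = question.lower()
    return [name for name, triggers in TOOLS.items()
            if any(t in q for t in triggers)]

print(compose_tools("How many credits does my plan include?"))
```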
Fire-and-forget for secondary operations. Tracing, analytics, audit logging — none of these block the user's request. Try the operation, log a warning if it fails, move on. User experience is sacred.
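The pattern is simple enough to sketch in a few lines. This is a hypothetical helper, not the platform's actual code: attempt the secondary operation, log a warning on failure, and return control to the request path either way.

```python
# Hypothetical fire-and-forget wrapper for secondary operations
# (tracing, analytics, audit logging): failures are logged as warnings
# and never block the user's request.
import logging

logger = logging.getLogger("fire_and_forget")

def fire_and_forget(operation, *args, **kwargs) -> bool:
    try:
        operation(*args, **kwargs)
        return True
    except Exception as exc:  # secondary ops must never propagate
        logger.warning("secondary op %s failed: %s", operation.__name__, exc)
        return False

def record_trace(event: dict) -> None:
    # stand-in for a flaky dependency
    raise ConnectionError("tracing service unreachable")

# The user's request proceeds even though tracing failed.
fire_and_forget(record_trace, {"step": "rag_query"})
print("user response sent")
```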
The system is live in production. The focus has shifted from building features to finding customers and creating content. The backlog has things like a public REST API and CRM integrations, but those are driven by customer requests, not my desire to build more things.
The hardest part of building a SaaS alone isn't the code. It's the context-switching — code, infrastructure, support, content, growth — all in the same day, every day. The system is the easy part. The business is the hard part.
If you want to see what a solo-built enterprise AI platform looks like:
There's a free tier if you want to spin up an agent and test it yourself.
What's the hardest architectural decision you've faced as a solo developer? The one where both options seemed reasonable and you just had to pick? Would love to hear your stories.
Impressive build — 6 microservices solo with AI pair programming is no joke.
The "not marketing from day one" and "over-engineering billing before paying customers" hit home. I made the same $20K mistake: built for 3 months, zero sales.
Now I'm focused on validation before code. You mentioned REST API and CRM integrations are backlog items "driven by customer requests" — how are you validating which integration (Salesforce? HubSpot? Slack?) is worth the engineering effort before a customer explicitly asks for it?
I'm testing a method to validate B2B features with video prototypes — show the workflow, test demand, before building the microservice.
Would love to hear how you're tackling prioritization now, or if you'd be interested in testing this approach on your next integration.
The multi-tenancy decision is fascinating. I love how you articulated the tradeoff: "every feature takes longer," but retrofitting would have been far more expensive. That's the kind of architectural bet that separates production-grade SaaS from side projects.
Your point about building Saba (the AI assistant for your AI platform) is brilliant: dogfooding at its finest. For solo founders, automating tier-1 support isn't just a nice-to-have, it's survival, like you said.
Hardest architectural decision I faced: whether to build a custom permissions system vs using a hosted auth provider like Auth0. The custom route gave us fine-grained control but cost us weeks. In hindsight, starting with the hosted solution and migrating later might have let us validate the core product faster.
The tracing service you built sounds like it paid for itself immediately. Observability is usually an afterthought for solo devs, but you made it first-class. Smart move.