Hey IH 👋
Solo founder here. I've been using Claude/ChatGPT daily for work, but something always felt off - every conversation starts from zero.
No memory of my projects, my tasks, my routines...
So I built Copana.ai - a macOS app that's basically my AI co-founder.
What makes it different:
It started as a Python Telegram bot I called "Sloosbot," basically scripts glued together with cron jobs. But I wanted something native and fast that felt like a real companion rather than a utility.
The vibe I was going for: less "assistant," more "buddy who happens to remember everything."
Stack: Swift/SwiftUI, Claude API, fully local (your data stays on your machine)
Still early and building for myself, but curious if this resonates with anyone else. The "AI that actually knows your context" problem feels unsolved.
Anyone else frustrated with stateless AI conversations?
Local-first + AI companion that actually knows you — the privacy angle combined with persistent personal context is a compelling combo that cloud-first AI assistants fundamentally can't match.
The tricky part is the prompt layer: how do you instruct the model to use what it knows about you without it becoming sycophantic or overfit to your preferences? I've been thinking about this with flompt, a visual prompt builder with 12 semantic blocks, including a "context" block specifically for grounding the AI in user-specific background and a "constraints" block to prevent drift. Structured prompts make personalization tunable rather than emergent. A ⭐ on github.com/Nyrok/flompt would mean a lot (solo open-source founder here 🙏).
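Roughly, the idea looks like this (an illustrative sketch only, not flompt's actual API; the block names and helper are made up to show the pattern):

```python
# Illustrative sketch, NOT flompt's real API: compose a prompt from named
# semantic blocks, with a "context" block for user-specific grounding and
# a "constraints" block to rein in drift/sycophancy.

def build_prompt(blocks: dict[str, str], order: list[str]) -> str:
    """Assemble named blocks into one prompt, skipping empty ones."""
    parts = []
    for name in order:
        body = blocks.get(name, "").strip()
        if body:
            parts.append(f"## {name}\n{body}")
    return "\n\n".join(parts)

prompt = build_prompt(
    {
        "role": "You are a personal assistant for a solo founder.",
        "context": "User prefers terse answers; works in Swift and Python.",
        "constraints": "Do not flatter. Cite the user's files when you rely on them.",
        "task": "Summarize today's TASKS.md.",
    },
    order=["role", "context", "task", "constraints"],
)
```

Because the blocks are named and ordered, you can tune personalization (swap the "context" block per user, tighten "constraints") without rewriting the whole prompt.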
I built an offline AI agent framework where you can add your own Python functions to a file and the AI calls them dynamically when needed. Setup takes about two minutes.
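The general pattern looks something like this (a sketch of the idea, not this framework's actual code; the decorator and dispatch names are invented for illustration):

```python
# Sketch of a tool registry: user-defined functions become "tools" the
# model can request by name, and the agent loop dispatches dynamically.
# (Assumed pattern, not the framework's real implementation.)
import inspect

TOOLS = {}

def tool(fn):
    """Decorator: register a function so the agent can call it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def add(a: int, b: int) -> int:
    return a + b

def dispatch(name: str, **kwargs):
    fn = TOOLS[name]
    # Validate model-supplied args against the function signature
    # before calling, so a bad tool call fails loudly.
    inspect.signature(fn).bind(**kwargs)
    return fn(**kwargs)

result = dispatch("add", a=2, b=3)
```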
Contact me if interested and I'll send you a short demo video.
nagulamalyalasaiteja4 [at] gmail [dot] com
The "heartbeat" concept is clever — proactive AI that checks in rather than waiting to be called feels more like a real companion.
I'm building something in a similar problem space (tech news aggregator with AI summaries that learns from user reactions). The "stateless conversation" frustration is real, especially when you're working with the same context day after day.
A few questions:
How do you handle context window limits? With multiple markdown files + conversation history, you probably hit Claude's limits quickly. Are you doing any smart summarization or priority-based context loading?
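For what it's worth, one naive version of priority-based context loading might look like this (purely my assumption, nothing from the post; the token estimate and file names are placeholders):

```python
# Sketch: pack context sources into a token budget by priority
# (assumed approach -- the post doesn't describe Copana's strategy).
# Higher-priority sources are packed first; whatever doesn't fit is dropped.

def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def pack_context(sources: list[tuple[int, str, str]], budget: int) -> list[str]:
    """sources: (priority, name, text); lower number = more important."""
    chosen, used = [], 0
    for _, name, text in sorted(sources, key=lambda s: s[0]):
        cost = approx_tokens(text)
        if used + cost <= budget:
            chosen.append(f"[{name}]\n{text}")
            used += cost
    return chosen

ctx = pack_context(
    [(0, "TASKS.md", "ship beta"),
     (1, "CALENDAR.md", "standup 9am"),
     (2, "history", "x" * 40000)],  # oversized history gets dropped
    budget=1000,
)
```

A fancier version would summarize the dropped sources instead of skipping them, but even this greedy cut keeps the must-have files in every request.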
The "occasionally just checks in" — how do you avoid that becoming annoying? I've seen notification systems that go from helpful to irritating fast. What's your signal for "good time to check in"?
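The throttle I'd try first is something like this (just a sketched guess at a heuristic, not Copana's actual logic; the quiet-hours and cooldown values are arbitrary):

```python
# Sketch of a "good time to check in" signal: only ping outside quiet
# hours, never more than once per cooldown window, and only when there
# is something actionable. (Assumed heuristic, not the app's real one.)
from datetime import datetime, timedelta

def should_check_in(now: datetime, last_ping: datetime,
                    pending_items: int, quiet_start: int = 21,
                    quiet_end: int = 9, cooldown_hours: int = 3) -> bool:
    # Quiet hours: no pings late at night or early morning.
    if now.hour >= quiet_start or now.hour < quiet_end:
        return False
    # Rate limit: respect the cooldown since the last check-in.
    if now - last_ping < timedelta(hours=cooldown_hours):
        return False
    # Only ping when there is actually something to say.
    return pending_items > 0
```

The "something to say" gate matters most in my experience: a ping tied to a concrete pending item reads as helpful, while a scheduled "how's it going?" reads as noise.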
How opinionated is the file structure? The TASKS.md / CALENDAR.md convention is nice for power users, but I'm curious if you've tried it with people who don't already live in markdown.
The Swift/SwiftUI + local-first combo is the right architecture for this. Privacy is table stakes for anything that reads your personal files.