
Your AI Agent and You Should Share a Brain. Here's How I Built That.

A few days ago someone messaged me on Reddit. They were frustrated: they'd been deep in a Claude session, the conversation had real momentum, and they needed to switch to a different model. Starting over meant re-explaining everything: the project, the context, the decisions already made. They wanted to know if there was a way to hand off conversations between sessions.
It's a problem every heavy AI user hits eventually. And the timing was good, because I'd just shipped exactly the thing that solves it.

The Problem With "AI Memory" Solutions

AI memory tools are having a moment. If you search GitHub or ProductHunt right now, you'll find a handful of projects all tackling the same core problem: LLMs are stateless. Every new session starts blank. The model that helped you architect a feature yesterday has no idea you exist today. Browse an AI subreddit and you'll see half a dozen carbon-copied posts claiming to have solved the memory problem.

To be fair, the solutions that exist are mostly technically impressive. They use vector stores, graph databases, semantic retrieval. Some are genuinely clever.
But they all share the same flaw: they're a new thing you have to think about. A separate service to run. A config file to maintain. A system that lives next to your workflow rather than inside it. For developers who are already managing dotfiles, local servers, and half a dozen dev tools, adding another layer of infrastructure to keep an AI informed is... not the frictionless future we were promised.

I built something different. The memory system in Vist is invisible, because it lives inside the notes app you'd be using anyway.

The Setup Takes Two Minutes (Not Two Hours)

Before I get into what it does, let me tell you what connecting it looks like — because this matters.
Most MCP memory solutions require a local install. You clone a repo, install Node or Python dependencies, edit a JSON config file to point your MCP client at a local server, and hope nothing breaks when you update your OS. That's fine if you enjoy that sort of thing. Many developers do.

Vist's MCP server runs remotely, and it uses OAuth 2.1 — the same standard your bank uses to let apps connect to your account. The setup in Claude.ai looks like this:

  1. Go to the (new) Customize page.
  2. Click Connections > New > Add Custom Connector.
  3. Paste in our URL: https://app.usevist.dev/mcp
  4. Click through the OAuth flow — you're authorising Claude to talk to your Vist account.

Done.

No terminal. No config files. No local process to keep running. Claude.ai, Claude Desktop, OpenCode, Cursor — any MCP-compatible client connects the same way.

The OAuth implementation follows the MCP authorisation spec (2025-03-26+), which means it'll work with any client that properly implements the protocol, today and as the ecosystem matures. The Reddit user I mentioned earlier set it up in the time it took him to finish a cup of coffee. That's the experience I was aiming for.
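For the technically curious: OAuth 2.1 makes PKCE mandatory, so any client going through that flow generates a verifier/challenge pair along these lines. This is a generic illustration of the standard, not Vist's actual implementation:

```ruby
require "securerandom"
require "digest"
require "base64"

# PKCE (RFC 7636, required by OAuth 2.1): the client generates a
# random verifier, sends its SHA-256 challenge with the authorisation
# request, then proves possession by presenting the verifier when
# exchanging the authorisation code for a token.
def pkce_pair
  verifier  = Base64.urlsafe_encode64(SecureRandom.random_bytes(32), padding: false)
  challenge = Base64.urlsafe_encode64(Digest::SHA256.digest(verifier), padding: false)
  { code_verifier: verifier, code_challenge: challenge, method: "S256" }
end
```

Because only the challenge travels with the initial request, an attacker who intercepts the authorisation code still can't redeem it without the verifier.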

What "Shared Second Brain" Actually Means

Here's the concept that makes Vist different from a standalone memory tool.

Vist is a notes and tasks app. You write in it. You capture project context, meeting notes, decisions, technical findings. You create tasks that link back to the notes they came from. It's a second brain in the classic sense — an external system that holds the things your head can't.

The MCP integration doesn't add a memory layer on top of that. It exposes the same knowledge base to your AI agents. When Claude connects to Vist, it reads the same notes you wrote. When it saves something to memory, it saves it as structured content in your account — content you can read, edit, and search yourself.
You and your agents aren't running separate systems that need to stay in sync. You're sharing one.

This means:

  • Notes you write about a project are automatically available to your AI assistant
  • Decisions the AI records are visible to you in your normal workflow
  • Tasks extracted from notes are the same tasks the AI can check off
  • Memory doesn't disappear when you close a tab or switch models

The AI isn't the smart thing at the centre of a complicated graph database. It's just a well-informed collaborator with access to your notes.

A Day in the Life (Mine)

Let me make this concrete, because I use Vist to build Vist, which means I test this workflow every single day.

This morning I opened a new Claude Code session to work on a background job that processes MCP tool calls. Before I typed a single line, Claude called load_context — a Vist MCP tool that retrieves active project state, recent decisions, current blockers, and what I was working on last time. Within seconds Claude had a complete picture: the architecture decisions, the gems in use, the testing conventions, what the last session completed and where it stopped.

We worked through the implementation together. When we hit an architectural decision worth keeping — how to handle token expiry for OAuth flows — Claude called record_memory with type decision_log, tagged it to the vist-app project. That decision is now in my Vist account. I can read it in the app. Future sessions, any model, will pick it up automatically.
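For illustration, a record_memory handler on the server side could be as simple as validating the type and persisting the entry as ordinary structured content. This is a hedged sketch with made-up field names, not Vist's actual schema:

```ruby
# Illustrative only: the five memory types named in this post, plus
# a handler that rejects unknown types and returns the entry that
# would be persisted as a normal, user-readable record.
MEMORY_TYPES = %w[project_state decision_log learned_facts active_context preferences].freeze

def record_memory(user:, type:, content:, project: nil, tags: [])
  raise ArgumentError, "unknown memory type: #{type}" unless MEMORY_TYPES.include?(type)

  {
    user: user,
    memory_type: type,
    content: content,
    project: project,
    tags: tags,
    recorded_at: Time.now.utc
  }
end
```

The point of the design is that the return value here has the same shape as a note you'd write yourself; there's no shadow datastore.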

Ninety minutes later, I ran out of tokens on my Claude plan. Classic. The session ended mid-task.

I opened OpenCode, which I use with Gemini API billing for less complex tasks or for challenges that need huge context windows. Gemini called load_context too. It greeted me with a summary of the current project state, noted that the OAuth token expiry work was in progress, and asked if I wanted to continue. It knew what Claude had been doing because Claude had updated the project state before the session ended.

We finished the work. Gemini ran the tests, committed the code, called update_project_state with what changed and what was next.
I closed the laptop. The context survived.

Tomorrow, whatever model I open with, the handoff will be just as clean.

The Memory Structure (For the Technically Curious)

Vist's agent memory system uses five typed memory categories, each with different persistence and retrieval behaviour:

project_state — The living document of a project. Current task, recent changes, next steps, blockers. Updated at meaningful checkpoints. This is what load_context prioritises.
decision_log — Architectural and product decisions with rationale. Append-only by convention. Invaluable when you come back to a project after weeks away and wonder why you made a particular choice.
learned_facts — Stable factual knowledge about your setup. "This project uses Kamal for deployment." "The test suite uses FactoryBot, not fixtures." Things that rarely change but are expensive to rediscover.
active_context — Short-lived context that expires automatically. "Currently debugging this specific issue." "Waiting on this API response." Clears itself so it doesn't pollute future sessions.
preferences — User-specific behaviour settings for the agent. Tone, formatting, tool usage patterns.
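The differing persistence behaviours could be expressed as a small lookup like the one below. The 24-hour TTL and the append/overwrite split are assumptions made for the sketch, not Vist's actual settings:

```ruby
# Illustrative persistence rules per memory type. Only active_context
# expires; the 24-hour ttl is an assumption for this sketch.
PERSISTENCE = {
  "project_state"  => { mode: :overwrite, ttl: nil },        # living document, updated in place
  "decision_log"   => { mode: :append,    ttl: nil },        # append-only by convention
  "learned_facts"  => { mode: :append,    ttl: nil },        # stable, rarely changes
  "active_context" => { mode: :overwrite, ttl: 24 * 3600 },  # clears itself
  "preferences"    => { mode: :overwrite, ttl: nil }
}.freeze

def expired?(type, recorded_at, now: Time.now.utc)
  ttl = PERSISTENCE.fetch(type)[:ttl]
  !ttl.nil? && (now - recorded_at) > ttl
end
```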

The load_context tool synthesises across all five types, applies session-mode logic (morning briefing vs. focus mode mid-day), and returns a structured context that fits comfortably within the model's working attention. It's not a context window dump — it's a prioritised briefing.
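As a rough sketch of what a "prioritised briefing" could mean in code, here's one way to assemble context in priority order under a character budget. The ordering and the budget are assumptions for illustration, not the actual algorithm:

```ruby
# Illustrative only: walk the memory types in priority order and
# stop adding entries once a rough character budget is exhausted,
# so the result is a briefing rather than a context-window dump.
PRIORITY = %w[project_state active_context decision_log learned_facts preferences].freeze

def load_context(memories, budget: 4_000)
  briefing = []
  used = 0
  PRIORITY.each do |type|
    memories.fetch(type, []).each do |entry|
      break if used + entry.length > budget
      briefing << "[#{type}] #{entry}"
      used += entry.length
    end
  end
  briefing
end
```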

What This Isn't

It's not a replacement for good project documentation. If you don't write things down, the memory system has nothing to work with. The discipline of maintaining a second brain — capturing decisions, writing notes, keeping project state current — is still on you (although smart models tend to remind you, or suggest taking notes!). Vist makes it easier and more useful, but it doesn't magic up context from nothing.

It's also not magic project management. Vist is deliberately simple: notes, tasks, folders, labels, semantic search. If you need Gantt charts and sprint velocity metrics, there are better tools. The opinionated simplicity is a feature, not a limitation — it means less cognitive load when you're deep in a problem and just need to capture something quickly.

The Bigger Picture

What I'm trying to build is a productivity app where the line between "what you know" and "what your AI assistant knows" is as thin as possible. Not because AI is cool (though it is), but because the context-switching cost of re-explaining yourself to a stateless model is genuinely painful and genuinely solvable.

The technical complexity is in the server, the MCP implementation, the OAuth flow, the memory architecture. The user experience should feel like nothing — like your assistant just... remembered.

That Reddit user who wanted to hand off a conversation between sessions? He connected Vist, started a new session, and Claude had enough context to continue the work. He didn't transfer a transcript. He didn't paste a summary. He just started a new session with a model that already knew what mattered.
That's the thing I'm building.


Vist is live at https://usevist.dev. Free tier available, no credit card required. MCP connection instructions are in the onboarding flow.

I'm building this in public — happy to answer questions about the MCP implementation, the OAuth setup, or the Rails architecture in the comments.

on March 2, 2026
  1. Doesn't sound bad, but will it be easy enough to warrant a place next to big names like Notion or Obsidian? With a little configuration you can use those for similar purposes, can't you?

    1. Well, I hope Vist will win over some people in the product and builder community because it's faster, more transparent, and the memory system is really part of the design, not something bolted on that forces you to change your workflow.

      Fingers crossed that what I build really is as simple to use as I think. 🤞🏻

  2. Did you keep GDPR, compliance, etc. in mind when building this SaaS? Or is this not important to you?
