
I Built a Code AST MCP That Saves 70% of Tokens, Speeds Up Coding Agents, and Went Viral (90K+ Views on X)

Last week, I open-sourced a lightweight Code MCP server that uses AST (Abstract Syntax Tree) parsing to give coding agents semantic understanding of your codebase. It went viral on X with 90K+ views.

Here are the posts that started it all:

https://x.com/RoundtableSpace/status/2031366453153157139
https://x.com/GithubProjects/status/2031233621382853030

The Problem: Coding Agents Are Burning Tokens

When a coding agent explores a codebase, it typically pulls entire files into its context window, paying for thousands of tokens of code it never uses. The agent doesn't need your whole file. It needs to know what functions exist, what classes are defined, and how they relate to each other.
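
cocoindex-code does this with tree-sitter across many languages, but the core idea (handing the agent an outline of what exists instead of the full file) can be sketched for Python alone with the stdlib `ast` module. This is a toy illustration, not the actual implementation:

```python
import ast

def outline(source: str) -> list[str]:
    """Return just the top-level function and class signatures of a module."""
    tree = ast.parse(source)
    entries = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            entries.append(f"def {node.name}({args})  # line {node.lineno}")
        elif isinstance(node, ast.ClassDef):
            methods = [n.name for n in node.body if isinstance(n, ast.FunctionDef)]
            entries.append(f"class {node.name}: {', '.join(methods)}  # line {node.lineno}")
    return entries

source = '''
class Cache:
    def get(self, key): ...
    def set(self, key, value): ...

def connect(url, timeout=30): ...
'''
print(outline(source))
```

Feeding the agent that two-line outline instead of the whole module is where the token savings come from.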

The Solution: AST-Based Semantic Code Search

I built cocoindex-code, a super-lightweight embedded MCP server that:

  • Parses your code into ASTs using tree-sitter, extracting meaningful chunks (functions, classes, methods)
  • Creates semantic embeddings of those chunks
  • Lets your coding agent search by meaning, not just text matching
  • Only re-indexes changed files - built on a Rust-based incremental indexing engine
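
The last bullet, re-indexing only changed files, comes from CocoIndex's Rust engine. The same idea can be sketched in Python with content hashes (the function name and cache shape here are hypothetical, not the engine's actual API):

```python
import hashlib

def needs_reindex(path: str, content: str, seen_hashes: dict[str, str]) -> bool:
    """Re-process a file only if its content hash changed since the last run."""
    digest = hashlib.sha256(content.encode()).hexdigest()
    if seen_hashes.get(path) == digest:
        return False           # unchanged: keep existing chunks and embeddings
    seen_hashes[path] = digest  # new or edited: record hash and re-process
    return True

seen = {}
assert needs_reindex("app.py", "def main(): ...", seen) is True   # first sight
assert needs_reindex("app.py", "def main(): ...", seen) is False  # unchanged
assert needs_reindex("app.py", "def main(x): ...", seen) is True  # edited
```

On a large repo this means an edit to one file costs one file's worth of parsing and embedding, not a full rebuild.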

The result? 70% token savings and noticeably faster coding agent responses.

1-Minute Setup - No Config Needed

For Claude Code:

pipx install cocoindex-code
claude mcp add cocoindex-code -- cocoindex-code

For Codex:

codex mcp add cocoindex-code -- cocoindex-code

That's it. No database, no API keys, no config files. It just works.

How It Works Under the Hood

  1. Tree-sitter parsing breaks your code into semantic chunks (functions, classes, etc.) across 20+ languages
  2. Local embedding model (SentenceTransformers) creates vector representations - completely free, no API key needed
  3. SQLite + vector search stores everything locally and portably
  4. Incremental indexing via CocoIndex (Rust engine) means only changed files get re-processed
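
Steps 2 and 3 can be sketched end to end with the stdlib. A toy word-hashing vector stands in for the SentenceTransformers embedding, and the SQLite schema below is illustrative rather than cocoindex-code's actual layout:

```python
import hashlib
import json
import math
import sqlite3

def toy_embed(text: str, dim: int = 64) -> list[float]:
    """Stand-in for a real embedding model: hash each word into a bucket
    of a fixed-size, L2-normalized vector."""
    vec = [0.0] * dim
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

# Store chunks plus embeddings locally in SQLite (in-memory here for brevity).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE chunks (path TEXT, start_line INTEGER, code TEXT, emb TEXT)")
for path, line, code in [
    ("auth.py", 10, "def verify password hash for login"),
    ("db.py", 42, "def open database connection pool"),
]:
    db.execute("INSERT INTO chunks VALUES (?, ?, ?, ?)",
               (path, line, code, json.dumps(toy_embed(code))))

def search(query: str, top_k: int = 1):
    """Rank stored chunks by cosine similarity to the embedded query."""
    q = toy_embed(query)
    rows = db.execute("SELECT path, start_line, code, emb FROM chunks").fetchall()
    scored = [(sum(a * b for a, b in zip(q, json.loads(emb))), path, line, code)
              for path, line, code, emb in rows]
    scored.sort(reverse=True)
    return scored[:top_k]

print(search("verify the login password"))
```

Because the vectors are normalized, the dot product is the cosine similarity; the real server swaps in a learned code embedding and the same ranking logic applies.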

When your agent needs to find code, it calls the search MCP tool with a natural language query and gets back exactly the relevant code chunks with file paths and line numbers.
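
On the wire this is a standard MCP `tools/call` request over JSON-RPC. The tool name `search` and the argument shape below are my guess at the interface, not confirmed from the repo:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": { "query": "where do we validate user passwords?" }
  }
}
```

The response carries the matching chunks with their file paths and starting line numbers, ready to drop into the agent's context.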

Why It Went Viral

I think people resonated with a few things:

  1. Real pain point - everyone using coding agents feels the token burn
  2. Zero friction - one pipx install and one MCP add command
  3. No vendor lock-in - works with Claude, Codex, Cursor, or any MCP-compatible agent
  4. Open source (Apache 2.0) - you can inspect every line of code
  5. No API keys required - the default embedding model runs locally for free

Supported Languages

Python, JavaScript/TypeScript, Rust, Go, Java, C/C++, C#, Ruby, Kotlin, Swift, SQL, Shell, and more. It uses tree-sitter grammars so adding new languages is straightforward.

What's Next

We're actively working on:

  • Better embedding models optimized for code (try nomic-ai/CodeRankEmbed with a GPU)
  • Enterprise features for large codebases and shared indexing across teams
  • More MCP tools beyond search

The repo is at github.com/cocoindex-io/cocoindex-code - 670+ stars and growing fast.

Built with CocoIndex, our open-source Rust-based data indexing framework.

Would love to hear your experience if you try it out. Drop a comment or open an issue on GitHub!

Posted to Product Launch on March 12, 2026

    Semantic retrieval feels like a real unlock.

    What I keep noticing, though, is that once the agent has enough context to act, the bottleneck stops being retrieval and starts being the execution boundary. Better search plus one visible plan before any local mutation feels stronger than either alone.

    Really nice job keeping the setup this short.
