
💻 Local AI: China Beats US, $2,000/mo Telegram Bot, Virtual Companies

In this Trends.vc Report, we talk about open LLM leaderboards, controversial tasks, unfiltered creativity and more.

💎 Why It Matters

Censorship lowers leverage. Privacy limitations lower trust.

Local AI shifts control from OpenAI, Microsoft and Google to the people.


🔍 Problem

You pay for centralized AI tools that tell you what you can and cannot do.

While saving your documents and innermost thoughts on their servers.


💡 Solution

Local AI gives you more control over your data and usage.

It becomes your advisor, not a supervisor.


🏁 Players

Local AI Models

  • Llama 3 • Local AI model from Meta
  • Gemma • Open-source AI models from Google
  • Qwen2 • Open-source models from Alibaba that top the open LLM leaderboards
  • DeepSeek Coder • AI models that write, understand and complete code
  • Stable Diffusion • Text-to-image generation model for photorealistic images

☁️ Opportunities

  • Build privacy-first, client-side apps. Privacy is a strong selling point for sensitive use cases.
  • WriteUp locked privacy behind a paid plan. It collects data from free users only.
  • Pieces is a local-first coding assistant that protects your codebase. It uses your local resources to give code suggestions.
  • Pieter Levels grew TherapistAI to $2,000/mo. He says local LLMs are perfect for sensitive use cases and plans to turn it into a client-side chatbot.
  • Build a user-friendly interface to help non-technical users connect, train and use local AI. Great UI leads to great UX.
  • LM Studio lets you build, run and chat with local LLMs.
  • WebLLM is an AI engine that runs local LLMs directly in the browser via WebGPU.
  • TypingMind lets you self-host local LLMs on your own infrastructure.
  • Make tutorials to help people build, run and use local AI models.
  • Matthew Berman shows how to run any AI model with LM Studio.
  • Zoltan C. Toth teaches The Local LLM Crash Course. He’s got 2,769 students.
  • Sam Witteveen made a series of tutorials on running local AI models with Ollama.
  • Eden Marco teaches how to build LLM apps with LangChain. He’s got 56,404 students.
  • Sharath Raju teaches how to use LangChain with Llama 2 and HuggingFace. He’s got 10,657 students.
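To make the privacy-first opportunity above concrete, here is a minimal sketch of a client-side app talking to a locally running model through Ollama's HTTP API. It assumes Ollama is serving on its default port (11434) and that a model such as llama3 has already been pulled; the prompt is a placeholder.

```python
import json
import urllib.request

# Ollama's default local endpoint; nothing leaves your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build a non-streaming generation request for the local Ollama server."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(model: str, prompt: str) -> str:
    """Send the prompt to the local model and return its full response text."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server and `ollama pull llama3`):
#   print(ask_local("llama3", "Summarize my private notes in one sentence."))
```

Because the endpoint is localhost, your documents and prompts never touch a third-party server, which is the whole selling point for sensitive use cases.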

🔮 Predictions

  • We’ll see virtual companies of AI agents that work together locally.
  • Camel lets you use open-source AI models to build role-playing AI agents.
  • OpenAGI lets you use local models to build collaborative AI teams. Here’s an example of an AI team that writes blogs.
  • MetaGPT lets you build a collaborative entity for complex tasks. It works best with commercial models, but you can use open-source AI too.
  • ChatDev uses several AI agents with different roles to build software. They collaborate by “attending” specialized seminars on design, coding, testing and more.
  • We’ll be able to build AI apps visually, without code. This will let non-technical users build complex apps for their workflows.
  • Flowise lets you build custom LLM flows and AI agents.
  • Langflow offers a visual interface for building AI-powered apps.
  • Obviously AI lets you build production-ready AI apps without code.
  • Lack of censorship will become a stronger selling point. Users will prefer unfiltered creativity over censored tools that refuse controversial yet legal tasks.
  • Venice is a privacy-first chatbot that stores chats in your browser.
  • xAI is an AI lab led by Elon Musk. It released Grok-1, an open-source and uncensored alternative to OpenAI.
  • Perplexity made uncensored AI models that outperformed GPT-3.5 and Llama 2. Paired with browser access, they went too far. Since they weren’t open-source, they were taken down within six months.
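The role-playing pattern behind Camel, MetaGPT and ChatDev boils down to a simple loop: each agent carries a role prompt and takes turns responding to the other's last message. Below is a minimal sketch of that loop; the model call is stubbed out, since a real version would query a local LLM (for example through Ollama) with the agent's role as the system prompt.

```python
from dataclasses import dataclass, field

def call_model(role: str, message: str) -> str:
    # Stub standing in for a local LLM call conditioned on the agent's role.
    return f"[{role}] reply to: {message}"

@dataclass
class Agent:
    role: str
    history: list = field(default_factory=list)

    def respond(self, message: str) -> str:
        """Generate a role-conditioned reply and remember the exchange."""
        reply = call_model(self.role, message)
        self.history.append((message, reply))
        return reply

def run_dialogue(a: Agent, b: Agent, task: str, turns: int = 2) -> list:
    """Alternate turns between two role-playing agents on a shared task."""
    transcript, message = [], task
    for _ in range(turns):
        message = a.respond(message)   # e.g. the "CEO" drafts an instruction
        transcript.append((a.role, message))
        message = b.respond(message)   # e.g. the "Programmer" executes it
        transcript.append((b.role, message))
    return transcript

# One round of a hypothetical two-agent "virtual company":
transcript = run_dialogue(Agent("CEO"), Agent("Programmer"), "Build a todo app", turns=1)
```

Swap the stub for a local model call and add more roles (designer, tester, reviewer) and you have the skeleton of a virtual company running entirely on one machine.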

🏔️ Risks

  • Hardware Requirements • If you’re serious about running AI models locally, you may need to buy a new computer. Depending on your needs and preferences, this may cost a few thousand dollars.
  • UX Issues • You may not be able to use multiple models simultaneously. You can open ChatGPT, Claude and Gemini in different tabs, but running several multi-billion-parameter models at once can exhaust a single machine’s resources.

🔑 Key Lessons

  • The performance gap between local and cloud AI is closing.
  • Local AI is self-sufficient. You can ask for help anytime, anywhere, as long as you have your device with you. No internet connection required.

🔥 Hot Takes

  • Governments will regulate local AI on par with centralized models. Local models still pose risks similar to proprietary ones.
  • China will beat the US in the AI race. Chinese open-source models already beat open-source models from the US, and Chinese proprietary models will eventually catch up too. The US will try to limit public access to AI research; such proposals have already surfaced.

😠 Haters

“I need more expensive and powerful hardware to run local AI models.”
This is the main tradeoff of local AI at the moment, but the hardware bar keeps dropping. See how llama.cpp runs models on consumer devices and how Apple is doing this at scale. This may be an inflection point for hardware and local AI.

“Setup and onboarding are hard. I can’t just visit a URL.”
User experience with local AI is a solvable problem. We’re getting there with open-source tools that make setting up local AI easier. Ollama lets you set up Llama 3 in 10 minutes.

“Local AI models perform worse than AI models made by tech giants.”
Open-source AI models can be a little worse, but they’re far more private and less censored. Depending on your use case, it can be wise to trade a little quality to keep your privacy.

“Providing support for models running locally sounds impossible.”
This is another tradeoff of local LLMs. Unless a model becomes completely unusable, users can use one AI model to debug another. One developer already uses local AI models as copilots for his coding copilots.

“Local AI models aren't a panacea for AI-related data privacy issues.”
This is the risk of storing data in digital form. “Private”, local AI may not protect your data if your computer is compromised.


🔗 Links

  1. Open-Source AI Is Wild • The thread behind this report.
  2. Building a Report on Local AI • The tweet behind this report.
  3. Open LLM Leaderboard • 100+ open-source AI models with performance tests.
  4. Why I Use Open Weights LLMs Locally • The benefits of using locally hosted open LLMs.

📁 Related Reports

  • Open-Source AI • Learn from and build on each other’s work.
  • Niche AI Models • Do specific tasks more accurately and efficiently.
  • Data as a Service • Gain a competitive edge by fueling your decisions with the right data.
  • AI Agents • Autonomous agents are the natural endpoint of automation in general.
  • Prompt Engineering • Learn how to direct AI to get more accurate results.

Become a Trends Pro Member to get the full report on Local AI.

Do you want to get the next free Trends.vc Report? Join 60,000+ founders discovering new markets and ideas.

Published July 29, 2024

