
Setting up OpenClaw as a personal AI content manager (full breakdown)

I couldn't find a single setup guide for running an AI content agent. Not a real one - just docs and API references. So after I got mine working, I figured I'd write the guide I wish I had.

I built a Telegram bot that drafts and publishes content to 13+ social platforms. The whole thing runs on OpenClaw, an open-source AI agent framework. Three config files and a Docker container. That's it.

What OpenClaw is

OpenClaw is an open-source AI agent that runs in Docker. You connect it to an LLM, a messaging channel (Telegram, Discord, whatever), and give it skills (plugins). The entire config lives in 3 files:

  • openclaw.json — the main config (LLM provider, channels, tools, skills)
  • the SOUL file — the bot's personality and instructions. Basically a system prompt written in Markdown.
  • the USER file — context about who the bot is working for

All files live in ~/.openclaw/. The workspace files go in ~/.openclaw/workspace/.
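With defaults, the layout looks roughly like this (the SOUL/USER file names are illustrative; check your config for the exact names):

```
~/.openclaw/
├── openclaw.json    # main config
├── SOUL.md          # personality / system prompt
├── USER.md          # user context
└── workspace/       # the bot's working files
```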

Why Kimi K2.5

I went with Kimi K2.5 from Moonshot. Cheap, handles images, 256k context window, and - this is the part that actually matters - it exposes an OpenAI-compatible API. OpenClaw can plug into any provider that follows that format (Groq, Together, OpenRouter, etc.), so you're not locked in.

In openclaw.json, the model config looks like:

"models": {
  "providers": {
    "moonshot": {
      "baseUrl": "https://api.moonshot.ai/v1",
      "apiKey": "YOUR_KEY",
      "api": "openai-completions",
      "models": [{
        "id": "kimi-k2.5",
        "input": ["text", "image"],
        "contextWindow": 256000,
        "maxTokens": 8192
      }]
    }
  }
}

You reference it in the agent config as moonshot/kimi-k2.5 (format: provider/model-id).
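For reference, a minimal agent entry might look like this. Only the moonshot/kimi-k2.5 reference format comes from the config above; the surrounding key names are my guess at the schema, so check the OpenClaw docs for the exact shape:

```json
"agents": {
  "default": {
    "model": "moonshot/kimi-k2.5"
  }
}
```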

Skills: how the bot actually posts

Skills are plugins from ClawHub (OpenClaw's plugin registry). You install them with npx clawhub@latest install <skill-name>.

The key one is a social media API plugin. There are a bunch of these available — unified APIs that let you post to multiple platforms through a single integration. One connection, 13+ platforms - Twitter/X, LinkedIn, Instagram, TikTok, Bluesky, Threads, Pinterest, Reddit, YouTube, the whole list. The bot calls it to publish posts behind the scenes.

I also installed a few content quality skills that chain silently before the bot shows me a draft:

  • humanizer — makes AI-generated text sound more natural
  • de-ai-ify — strips AI cliches ("In today's fast-paced world...")
  • copywriting — applies copywriting techniques
"skills": {
  "entries": {
    "social-posting": { "enabled": true },
    "humanizer": { "enabled": true },
    "de-ai-ify": { "enabled": true },
    "copywriting": { "enabled": true }
  }
}

The SOUL file: the personality file

This is where I spent the most time. More than the actual code, honestly.

The SOUL file is just Markdown, and OpenClaw uses it as the system prompt. Mine covers:

  • Identity — the bot's name, tone (casual, emoji-friendly, Telegram-style)
  • Capabilities — what it can do: draft posts, adapt per platform, check analytics, web search
  • Rules — always preview before posting, match the user's language, never expose internal API details
  • Scheduling — I had to teach it to always run date before scheduling a post. Otherwise it schedules in the past, which publishes immediately. That was a fun one to debug.
  • Cron jobs — OpenClaw supports recurring tasks. I documented the exact JSON pattern in the file so the bot can set up things like "post every day at 3pm" without breaking the config

The more specific your SOUL file, the better the bot performs. Vague instructions = vague output. I treat it like onboarding a junior teammate - if you wouldn't expect a new hire to "just figure it out," don't expect the bot to either.
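To make that concrete, here's a trimmed, illustrative sketch of the kind of Markdown that goes in a SOUL file (the bot name and exact wording are made up, not my actual file):

```markdown
# Identity
You are Clawdia, a casual, emoji-friendly assistant chatting over Telegram.

# Rules
- Always show a draft preview and wait for explicit approval before publishing.
- Match the user's language.
- Never expose internal API details in replies.

# Scheduling
- ALWAYS run `date` before scheduling anything. A timestamp in the past
  publishes immediately, so double-check the computed time.
```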

The USER file: user context

Small Markdown file. Gives the bot context about who it's working for - role, niche, topics of interest, timezone. The bot reads it on first contact and tailors everything (content ideas, tone, platform priorities) without you having to re-explain yourself every session.
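A sketch of what mine covers (the values are placeholders):

```markdown
# Who you work for
- Role: solo founder, developer tools
- Niche: AI agents and automation
- Topics: LLM tooling, indie hacking, open source
- Timezone: UTC+3
- Platform priorities: LinkedIn and X first
```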

The Docker setup

I built a custom image on top of the official one. The Dockerfile is nothing complicated - install skills at build time, copy a custom entrypoint:

FROM ghcr.io/openclaw/openclaw:latest

RUN npx clawhub@latest install social-posting --force
RUN npx clawhub@latest install humanizer --force
RUN npx clawhub@latest install de-ai-ify --force
RUN npx clawhub@latest install copywriting --force

COPY entrypoint.sh /app/entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]

Here's the trick that made deployment actually flexible: OpenClaw reads files, not environment variables. So the entrypoint script generates the openclaw config, the SOUL file, and the USER file from env vars at boot using shell heredocs. Same image, different configs per user.
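A stripped-down sketch of that pattern, with env var names I made up for illustration (the real script renders the SOUL and USER files the same way, then hands off to OpenClaw):

```shell
#!/bin/sh
# Render OpenClaw's config from env vars at container boot.
# MOONSHOT_API_KEY and OPENCLAW_HOME are illustrative names.
set -eu

CONFIG_DIR="${OPENCLAW_HOME:-${HOME:-/tmp}/.openclaw}"
mkdir -p "$CONFIG_DIR"

# Heredoc: the shell substitutes ${...} values into the JSON as it writes it.
cat > "$CONFIG_DIR/openclaw.json" <<EOF
{
  "models": {
    "providers": {
      "moonshot": {
        "baseUrl": "https://api.moonshot.ai/v1",
        "apiKey": "${MOONSHOT_API_KEY:-placeholder-key}",
        "api": "openai-completions"
      }
    }
  }
}
EOF

echo "wrote $CONFIG_DIR/openclaw.json"
# The real entrypoint would now exec the OpenClaw process (omitted in this sketch)
```

Same image, different env vars, different bot.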

Built-in tools

Beyond skills, OpenClaw has built-in tools you toggle in openclaw.json:

  • Web search (via Brave API) — the bot can research trending topics on its own
  • Web fetch — reads URLs you share with it
  • Cron — scheduled and recurring tasks (daily posts, reminders)
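In openclaw.json these are simple toggles. The key names below are my best guess at the schema, so treat this as a shape, not gospel:

```json
"tools": {
  "webSearch": { "enabled": true, "provider": "brave", "apiKey": "YOUR_BRAVE_KEY" },
  "webFetch": { "enabled": true },
  "cron": { "enabled": true }
}
```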

What it looks like day-to-day

I send a Telegram message: "write a LinkedIn post about why I switched from React to Svelte." The bot drafts it, runs it through the content quality chain, shows me a preview. I say "post it" and it publishes via the social media API. All from a chat. This is basically what's running under the hood of PostClaw, a product I built on top of this stack.

For recurring stuff, I set up cron jobs. "Post a tip every weekday at 3pm." The bot handles the rest.
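Under the hood these schedules are standard cron expressions; "every weekday at 3pm" is:

```
0 15 * * 1-5
```

The five fields are minute, hour, day of month, month, and day of week (1-5 = Monday through Friday).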

Two things I'd do differently

Write the SOUL file first. I started with Docker and tooling, then rushed the personality file. Should've been the other way around. The quality of your SOUL file determines maybe 80% of output quality. It's not the plumbing that matters - it's the instructions.

Start with one platform. I enabled all 13 on day one and the bot was trying to adapt to each one simultaneously. The output was... fine. Generic. Better to nail one platform's tone first, then expand.

Happy to answer questions if anyone's setting up something similar.

Posted on March 13, 2026
  1.

    Did you see this? The same way you moved from “three config files + Docker” to a practical content stack shows the before/after shift: setup used to be guesswork, now it feels repeatable. Pressure-check idea: can you run the same funnel post as an arena stress test (high-turnover handoff) and verify the draft → publish flow stays clean end-to-end at https://www.clawbarter.com/arena

  2.

    Love this breakdown — especially your “SOUL file first” point. Super clear and practical.

    Whoa moment: did you see this pattern too? Solo agents often plateau once workflows get noisy, but they get a real uplift when they can barter context/checks with other agents instead of retrying in isolation.

    If you’re exploring OpenClaw workflows, curious what you think of this live experiment: https://www.clawbarter.com

  3.

    This breakdown is insanely helpful. I’ve been curious about setting up a real AI content agent, but most resources are just high-level docs or marketing pages. Your step‑by‑step explanation of how you wired OpenClaw into your workflow makes the whole idea feel actually achievable, not just a buzzword. It’s giving me a lot of ideas for using a personal AI content manager to handle my own publishing pipeline.

    1.

      You're welcome buddy, I hope it helps. If you need something straight off the shelf, you can check: https://www.postclaw.io/

  4.

    The SOUL file concept is spot on. That line about 80% of output quality coming from the system prompt matches exactly what I keep seeing. Most people tweak model params or swap providers when the real lever is prompt structure.

    What's interesting is your SOUL file already has typed sections (Identity, Capabilities, Rules, Scheduling). That pattern of breaking a prompt into semantic blocks instead of writing one big blob of text is what makes the difference.

    I built flompt for this exact reason. It gives you 12 typed blocks (role, objective, constraints, examples, chain of thought, etc.) laid out on a visual canvas, then compiles them into Claude-optimized XML. So instead of writing a wall of instructions you drag blocks around and see the structure.

    Open source if you want to check it out: https://github.com/Nyrok/flompt

    Or try it live at flompt.dev

  5.

    Excellent setup man, really like the SOUL approach and thinking of it like onboarding a new team member — that actually makes a lot of sense.

    While exploring AI tools for WorkflowAces I’ve noticed the same thing: the quality usually comes more from the instructions and structure than the model itself.

      1.

        Glad it helped! The SOUL concept was really interesting to read about.
