I couldn't find a single setup guide for running an AI content agent. Not a real one - just docs and API references. So after I got mine working, I figured I'd write the guide I wish I had.
I built a Telegram bot that drafts and publishes content to 13+ social platforms. The whole thing runs on OpenClaw, an open-source AI agent framework. Three config files and a Docker container. That's it.
OpenClaw is an open-source AI agent that runs in Docker. You connect it to an LLM, a messaging channel (Telegram, Discord, whatever), and give it skills (plugins). The entire config lives in 3 files:
All files live in ~/.openclaw/. The workspace files go in ~/.openclaw/workspace/.
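On disk that looks roughly like this (the SOUL and USER filenames here are illustrative — they're whatever you point the config at):

```
~/.openclaw/
├── openclaw.json        # models, skills, tools
└── workspace/
    ├── SOUL.md          # system prompt (personality + rules)
    └── USER.md          # who the bot works for
```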
I went with Kimi K2.5 from Moonshot. Cheap, handles images, 256k context window, and - this is the part that actually matters - it exposes an OpenAI-compatible API. OpenClaw can plug into any provider that follows that format (Groq, Together, OpenRouter, etc.), so you're not locked in.
In openclaw.json, the model config looks like:
"models": {
"providers": {
"moonshot": {
"baseUrl": "https://api.moonshot.ai/v1",
"apiKey": "YOUR_KEY",
"api": "openai-completions",
"models": [{
"id": "kimi-k2.5",
"input": ["text", "image"],
"contextWindow": 256000,
"maxTokens": 8192
}]
}
}
}
You reference it in the agent config as moonshot/kimi-k2.5 (format: provider/model-id).
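Concretely, that means a line like this somewhere in the agent section of openclaw.json (the surrounding field names are my guess at the shape — check the OpenClaw docs for the exact schema):

```json
"agent": {
  "model": "moonshot/kimi-k2.5"
}
```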
Skills are plugins from ClawHub (OpenClaw's plugin registry). You install them with `npx clawhub@latest install <skill-name>`.
The key one is a social media API plugin. There are a bunch of these available — unified APIs that let you post to multiple platforms through a single integration. One connection, 13+ platforms - Twitter/X, LinkedIn, Instagram, TikTok, Bluesky, Threads, Pinterest, Reddit, YouTube, the whole list. The bot calls it to publish posts behind the scenes.
I also installed a few content quality skills that chain silently before the bot shows me a draft:
"skills": {
"entries": {
"social-posting": { "enabled": true },
"humanizer": { "enabled": true },
"de-ai-ify": { "enabled": true },
"copywriting": { "enabled": true }
}
}
This is where I spent the most time. More than the actual code, honestly.
The SOUL file is just Markdown, and OpenClaw uses it as the system prompt. Mine covers four sections: identity, capabilities, rules, and scheduling.

One rule I added the hard way: make the bot check the current date before scheduling a post. Otherwise it schedules in the past, which publishes immediately. That was a fun one to debug.

The more specific your SOUL file, the better the bot performs. Vague instructions = vague output. I treat it like onboarding a junior teammate - if you wouldn't expect a new hire to "just figure it out," don't expect the bot to either.
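For a sense of scale, a rule in a SOUL file is just an ordinary Markdown bullet (the wording here is illustrative, not copied from mine):

```markdown
## Rules
- Always check today's date before scheduling; never schedule a post in the past.
- Show me a draft and wait for explicit approval before publishing anything.
```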
Small Markdown file. Gives the bot context about who it's working for - role, niche, topics of interest, timezone. The bot reads it on first contact and tailors everything (content ideas, tone, platform priorities) without you having to re-explain yourself every session.
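A sketch of what goes in the USER file (the exact fields are up to you — these are illustrative):

```markdown
# User
- Role: indie developer, solo founder
- Niche: developer tools
- Topics: AI agents, TypeScript, self-hosting
- Timezone: Europe/Berlin
- Platform priorities: LinkedIn first, then X
```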
I built a custom image on top of the official one. The Dockerfile is nothing complicated - install skills at build time, copy a custom entrypoint:
```dockerfile
FROM ghcr.io/openclaw/openclaw:latest

RUN npx clawhub@latest install social-posting --force
RUN npx clawhub@latest install humanizer --force
RUN npx clawhub@latest install de-ai-ify --force

COPY entrypoint.sh /app/entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]
```
Here's the trick that made deployment actually flexible: OpenClaw reads files, not environment variables. So the entrypoint script generates the openclaw config, the SOUL file, and the USER file from env vars at boot using shell heredocs. Same image, different configs per user.
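A minimal sketch of that entrypoint, assuming env var names like `MOONSHOT_API_KEY` and `BOT_SOUL` (those, and the final handoff path, are my placeholders, not OpenClaw's documented interface):

```shell
#!/bin/sh
set -eu

CONFIG_DIR="${HOME:-/tmp}/.openclaw"
mkdir -p "${CONFIG_DIR}/workspace"

# Render the main config from environment variables at container boot.
cat > "${CONFIG_DIR}/openclaw.json" <<EOF
{
  "models": {
    "providers": {
      "moonshot": {
        "baseUrl": "https://api.moonshot.ai/v1",
        "apiKey": "${MOONSHOT_API_KEY:-}",
        "api": "openai-completions",
        "models": [{ "id": "kimi-k2.5", "input": ["text", "image"],
                     "contextWindow": 256000, "maxTokens": 8192 }]
      }
    }
  }
}
EOF

# Render the SOUL file the same way, with a bland fallback if unset.
cat > "${CONFIG_DIR}/workspace/SOUL.md" <<EOF
${BOT_SOUL:-You are a social media content assistant.}
EOF

# Then hand off to the real OpenClaw entrypoint (path is a guess):
# exec node /app/index.js
```

The nice property is that the image stays generic: every per-user difference lives in env vars, which Docker, Compose, and every PaaS already know how to inject.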
Beyond skills, OpenClaw has built-in tools you toggle in openclaw.json.
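The toggle shape mirrors the skills block; cron is the built-in I actually rely on (the exact key names may differ — check the docs):

```json
"tools": {
  "cron": { "enabled": true }
}
```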
I send a Telegram message: "write a LinkedIn post about why I switched from React to Svelte." The bot drafts it, runs it through the content quality chain, shows me a preview. I say "post it" and it publishes via the social media API. All from a chat. This is basically what's running under the hood of PostClaw, a product I built on top of this stack.
For recurring stuff, I set up cron jobs. "Post a tip every weekday at 3pm." The bot handles the rest.
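Under the hood, "every weekday at 3pm" is just a standard cron expression (how OpenClaw stores its schedules internally is its own detail):

```
0 15 * * 1-5    # minute hour day-of-month month day-of-week → 15:00 Mon-Fri
```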
Write the SOUL file first. I started with Docker and tooling, then rushed the personality file. Should've been the other way around. The quality of your SOUL file determines maybe 80% of output quality. It's not the plumbing that matters - it's the instructions.
Start with one platform. I enabled all 13 on day one and the bot was trying to adapt to each one simultaneously. The output was... fine. Generic. Better to nail one platform's tone first, then expand.
Happy to answer questions if anyone's setting up something similar.
The SOUL file concept is spot on. That line about 80% of output quality coming from the system prompt matches exactly what I keep seeing. Most people tweak model params or swap providers when the real lever is prompt structure.
What's interesting is your SOUL file already has typed sections (Identity, Capabilities, Rules, Scheduling). That pattern of breaking a prompt into semantic blocks instead of writing one big blob of text is what makes the difference.
I built flompt for this exact reason. It gives you 12 typed blocks (role, objective, constraints, examples, chain of thought, etc.) laid out on a visual canvas, then compiles them into Claude-optimized XML. So instead of writing a wall of instructions you drag blocks around and see the structure.
Open source if you want to check it out: https://github.com/Nyrok/flompt
Or try it live at flompt.dev
Excellent setup man, really like the SOUL approach and thinking of it like onboarding a new team member — that actually makes a lot of sense.
While exploring AI tools for WorkflowAces I’ve noticed the same thing: the quality usually comes more from the instructions and structure than the model itself.
Thanks man!
Glad it helped! The SOUL concept was really interesting to read about.