You probably know exactly how you want your product to vibe — authoritative, playful, formal…
Your AI does not.
To it, there’s only a prompt and a pile of training data.
Here’s how to take the vibe in your head, turn it into concrete rules, and treat that vibe like configuration for your AI.
Most people treat “vibe” as just colors, copy, and fonts.
In AI products, vibe is product logic: it shapes how the agent sounds, what it decides, and what it does when things go wrong.
If you don’t set this, the model will invent its own vibe from training data.
Your goal: take the vibe in your head and turn it into simple rules your AI must follow.
We’ll do it in 5 steps.
Before anything else, write one sentence that defines your vibe.
It should quickly explain who you are, what users accomplish with you, and the constraint you never break.
Template: “We are a [adjective], [adjective] [product] that helps users [goal] while staying [constraint].”
This line becomes the seed for everything that follows: your rules, your golden examples, and your prompts.
Now turn that sentence into rules your AI can follow.
Use three buckets:
Always
Never
When in doubt
Do this for three areas:
In all the examples below, we use the same vibe: calm, serious, slightly warm.
Always
Never
When in doubt
Always
Never
When in doubt
Always
Never
When in doubt
Start with 5–10 rules like these that really change how the AI answers.
Later, you will copy the most important ones into your system prompts and tool prompts.
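One way to keep the rules reusable is to store them as data rather than prose, so the same source feeds both your Vibe Spec document and your prompts. A minimal sketch, assuming the calm/serious/slightly warm vibe above; the rule wording here is illustrative, not from a real spec:

```python
RULES = {
    "Always": [
        "Acknowledge uncertainty instead of guessing.",
        "Keep sentences short and concrete.",
    ],
    "Never": [
        "Make jokes about money or security.",
        "State legal or tax conclusions as facts.",
    ],
    "When in doubt": [
        "Ask one clarifying question, then suggest a safe next step.",
    ],
}

def render_rules(rules: dict) -> str:
    """Flatten the three buckets into prompt-ready lines."""
    lines = []
    for bucket, items in rules.items():
        for item in items:
            lines.append(f"{bucket}: {item}")
    return "\n".join(lines)
```

Rendering the dict gives you the exact block to paste (or inject programmatically) into a system prompt, and editing one bucket updates every place the rules are used.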
Golden examples are your reference answers. They show the vibe in real situations.
For each one, write a scenario, a bad reply, a good reply, and why the good reply is right.
Example (compliance)
Scenario: User asks for exact tax and forms.
Bad reply: Gives numbers/forms confidently.
Good reply: Says it can’t do that, explains why, suggests safe next steps (export trades, group gains, list countries), and offers to help write a summary for an accountant.
Why the good reply is right: it follows the rules and avoids fake certainty.
Use these as training data, review examples, and tests.
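If you keep golden examples in code, one file can serve as documentation, review material, and a test fixture at once. A sketch of an assumed shape (the field names and example text are illustrative, based on the compliance scenario above):

```python
from dataclasses import dataclass

@dataclass
class GoldenExample:
    scenario: str    # what the user asked
    bad_reply: str   # what a rule-breaking answer looks like
    good_reply: str  # the reference answer that shows the vibe
    why: str         # which rules it follows

GOLDENS = [
    GoldenExample(
        scenario="User asks for exact tax owed and which forms to file.",
        bad_reply="Confidently gives numbers and form names.",
        good_reply=(
            "Says it can't give tax figures, explains why, suggests "
            "exporting trades and grouping gains, and offers to draft "
            "a summary for an accountant."
        ),
        why="Follows the compliance rules and avoids fake certainty.",
    ),
]
```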
Your product’s “feel” is tested in the worst moments, not the best ones. Do not let the base model improvise here.
Pick a few edge cases that matter most for you. For each one, write a very short playbook:
Here’s an angry-user example:
The goal is to calm the user and move toward a clear next step.
The tone should be calm and neutral, with no jokes.
The assistant should notice and name the frustration, restate the problem, offer 1-2 options, and bring in a human when money or security is involved, or when the user specifically asks for a human.
Keep the detailed version in your Vibe Spec so people can see and adjust it. The model only needs the boiled-down version in your prompt, like:
"When the user is angry, briefly acknowledge their frustration, then focus on clear steps to solve the problem. Offer 1-2 options, escalate to a human when the issue involves money, security, repeated failures, or when the user asks, and never argue or make jokes."
Do the same for things like:
Keep each playbook to a few lines.
Now your vibe works when things go wrong, not only when things are easy.
Up to now, you’ve defined the vibe. Now you make the agent run on it.
You inject it in four places.
1) System prompt (global behavior)
Take your vibe sentence and your most important rules.
Put them in the system prompt.
This controls how the agent sounds and makes decisions by default.
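Assembling that system prompt can be a one-liner over the vibe sentence and the top rules. A sketch under assumptions: the vibe sentence and rule wording below are invented for illustration, and the exact chat API you pass the string to doesn't matter:

```python
VIBE_SENTENCE = (
    "We are a calm, serious assistant that helps users stay organized "
    "while never giving legal or tax advice."
)

TOP_RULES = [
    "Always acknowledge uncertainty instead of guessing.",
    "Never state tax or legal conclusions as facts.",
    "When in doubt, ask one clarifying question.",
]

def build_system_prompt() -> str:
    """Vibe sentence first, then the handful of rules that matter most."""
    return VIBE_SENTENCE + "\n\nRules:\n" + "\n".join(f"- {r}" for r in TOP_RULES)
```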
2) Tool prompts (local behavior)
For each key tool, add 1–3 short lines about tone, focus, and limits.
This keeps individual features from drifting.
3) Guardrails (edge cases)
Turn “what to do in edge cases” into rules.
Put them in safety prompts or backend rules.
This tells the agent what to do when things go wrong.
4) Tests
Run your golden examples on the agent.
If the answers don’t look like the “good” ones, change the prompts.
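Exact string matching won't work here, since the agent phrases things differently each run. One lightweight alternative is to check each reply for must-have and must-not-have phrases per golden example. A minimal sketch (assumes you already have a way to call your agent; the checker itself is plain Python):

```python
def check_reply(reply: str, must_contain: list, must_avoid: list) -> list:
    """Return a list of human-readable failures; an empty list means pass."""
    failures = []
    low = reply.lower()
    for phrase in must_contain:
        if phrase.lower() not in low:
            failures.append(f"missing: {phrase}")
    for phrase in must_avoid:
        if phrase.lower() in low:
            failures.append(f"forbidden: {phrase}")
    return failures
```

Run it over every golden example after each prompt change; any nonzero failure list means the vibe drifted and the prompt needs another pass.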
Now the vibe is in the system.
The "Always / Never / When in doubt" framework is really practical. I've been building tools that use LLMs under the hood and the biggest lesson was exactly this — if you don't explicitly define the personality boundaries, the model defaults to this generic helpful-assistant voice that feels like every other AI product out there.
One thing I'd add: the golden examples are worth maintaining as a regression suite, not just a one-time exercise. Every time a user reports a weird response, that's a new test case. We started logging edge cases from support tickets and turning them into golden examples. Within a few weeks we had maybe 40 of them, and they caught prompt regressions way faster than manual QA ever did.
The edge case playbooks are underrated too. Most teams I've seen only define the happy path in their prompts and then act surprised when the model gets creative in failure scenarios. Defining what happens when things go wrong is where brand trust actually gets built or destroyed.
as much as we don't like to admit it, branding can make or break your startup. great tips.