If you’ve connected your SaaS to MCP, congrats — your product can now be used inside ChatGPT or Claude.
But here’s the thing: You’re no longer designing for humans. You’re designing for language models.
LLMs don’t care about your UI. They care about actions, structure, and clarity.
Here’s how to make your actions actually work inside AI tools.
Each action should only do one thing.
If it tries to do more than one thing, the model can get confused.
Good action: `create_invoice`
Bad action: `create_invoice_and_send_email_if_client_exists`
Why: The model reads both the name and the description to understand what the action does.
If either one is too long or tries to explain too many things, the model gets confused or ignores it.
Too much logic leads to mistakes.
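A single-purpose action can be sketched like this (the field names and shape here are illustrative, not a specific MCP SDK format):

```json
{
  "name": "create_invoice",
  "description": "Creates a draft invoice for an existing client.",
  "inputs": {
    "client_id": "string",
    "amount": "number",
    "due_date": "string"
  }
}
```

One verb, one object, one outcome. Anything else (sending the email, checking the client) belongs in its own action.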
If your inputs are deep or complex, the model won’t understand them.
Use flat fields like strings, numbers, and booleans.
Good:

```json
"inputs": {
  "name": "string",
  "email": "string",
  "newsletter_opt_in": "boolean"
}
```

Bad:

```json
"inputs": {
  "user": {
    "profile": {
      "contact": {
        "email": "string"
      }
    }
  }
}
```
Why: The model needs to quickly match fields to what the user said. Flat inputs make that easier.
Say what the action does in plain English. No code, no jargon.
Good: “Adds a new task to a project with a name and due date.”
Bad: “Calls POST /tasks with required params to create new task entity.”
Why: The model reads your descriptions to decide what each action does. If it’s too vague or too technical, it may skip or misuse it.
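In the schema, that plain-English sentence is simply the action's `description` field (the `add_task` name below is illustrative):

```json
{
  "name": "add_task",
  "description": "Adds a new task to a project with a name and due date."
}
```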
LLMs don’t infer what’s not written. If there's a condition, you need to say it directly.
Why: LLMs don't infer edge cases. If there's an exception or a limit, a maximum, a precondition, a failure mode, the model won't know unless you say so. Put it in the description, clearly and directly.
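For instance, the `create_invoice` action from earlier could state its precondition right in the description (the wording and the `create_client` action name are illustrative):

```json
{
  "name": "create_invoice",
  "description": "Creates a draft invoice for an existing client. Fails if the client does not exist; create the client first with create_client."
}
```

Now the model knows to check for the client instead of discovering the failure at runtime.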
The model needs to know what your action returns. If you leave the output blank, it may think the action didn’t work.
Bad: `"output": {}`

Good:

```json
"output": {
  "task_id": "string",
  "url": "string",
  "message": "Task created! Here's the link: https://..."
}
```
Tips for better output: return an ID the model can reuse in later steps, a link the user can open, and a short human-readable message.
Why: ChatGPT shows your output to the user. If it's blank or confusing, users think your tool didn't work, even if it did.
LLMs don’t do everything at once. They take things one step at a time.
So instead of building one big action that does everything, build small actions that each do one clear thing — so the model can put them together.
**Here’s an example:** A user might say: “Get form submissions tagged ‘error’ and create tasks for each one in Linear.”
The model will try to do this in steps: first fetch the submissions tagged “error”, then create a Linear task for each one.
This only works if your actions are built to be used this way.
If they aren’t, things can break: the model may pick the wrong action, do the steps out of order, or stop partway through.
And if your action doesn’t return anything useful, like an ID or a message, the model won’t know what happened, and it won’t know how to continue.
Why: The model is trying to build a little workflow out of your actions. If your actions are too big, too vague, or don’t connect well — the workflow falls apart.
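Sketching the two actions for that example (names and fields are illustrative), notice how the output of the first maps cleanly onto the input of the second:

```json
{
  "name": "get_form_submissions",
  "description": "Returns form submissions, optionally filtered by tag.",
  "inputs": { "tag": "string" },
  "output": { "submissions": [{ "submission_id": "string", "text": "string" }] }
}
```

```json
{
  "name": "create_task",
  "description": "Creates a task in Linear with a title and description.",
  "inputs": { "title": "string", "description": "string" },
  "output": { "task_id": "string", "url": "string" }
}
```

Because each action is small and returns concrete fields, the model can loop over the submissions and feed them into the next step on its own.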
LLMs don’t follow code rules. They just read what you write and try to follow it. So if there’s an important rule — say it with words.
Bad:

```json
"amount": { "type": "number" }
```

This tells the model it needs a number, but not what kind.

Good:

```json
"amount": { "type": "number", "description": "Must be between 0 and 10,000" }
```

Now the model knows what’s allowed.

Bad:

```json
"date": { "type": "string" }
```

This says it’s a string but doesn’t specify the format.

Good:

```json
"date": { "type": "string", "description": "Use format YYYY-MM-DD (like 2025-03-01)" }
```

Now the model knows exactly what the date should look like.
At the top of your schema, include a short description that tells the model what your tool does — and when to use it.
```json
{
  "name": "ExampleFormBuilderCo",
  "description": "Create and manage forms. Best for surveys, feedback, and internal data collection."
}
```
Why: This is what the model sees first when choosing tools. If it’s generic or unclear, your product gets skipped in favor of one that sounds like a better fit.
Real people don’t type perfect prompts. They say things like:
“hey i need to create a feedback form with 3 questions then get the latest replies and drop them in gsheet”
Try prompts like that when you test.
If the model doesn’t choose the right actions, or does things out of order, or just gives up — that means something in your schema still isn’t clear.
You’re not just building an API anymore. You’re building something the model has to use — like a real user would.
Try it out and watch what happens. Each of those failures points at something in your schema that still isn’t clear, and fixing it makes your tool easier for the model to use.
To the model, your schema is the product. So treat it like one.