
Your app is wired into MCP. But ChatGPT still skips it. Here’s why.

If you’ve connected your SaaS to MCP, congrats — your product can now be used inside ChatGPT or Claude.

But here’s the thing: You’re no longer designing for humans. You’re designing for language models.

LLMs don’t care about your UI. They care about actions, structure, and clarity.

Here’s how to make your actions actually work inside AI tools.

1. Use actions that do only one thing

Each action should only do one thing.

If it tries to do more than one thing, the model can get confused.

Good action: create_invoice

Bad action: create_invoice_and_send_email_if_client_exists

Why: The model reads both the name and the description to understand what the action does.

If either one is too long or tries to explain too many things, the model gets confused or ignores it.

Too much logic leads to mistakes.
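
For illustration, here's a rough sketch of what splitting that combined action into two single-purpose ones might look like (the schema format is simplified and the names and fields are made up):

{
  "actions": [
    {
      "name": "create_invoice",
      "description": "Creates an invoice for a client with an amount and a due date.",
      "inputs": { "client_id": "string", "amount": "number", "due_date": "string" }
    },
    {
      "name": "send_invoice_email",
      "description": "Emails an existing invoice to its client.",
      "inputs": { "invoice_id": "string" }
    }
  ]
}

The model can still do both things, but now it decides when to call each one instead of guessing what a combined action will do.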

2. Make inputs easy to read

If your inputs are deeply nested or complex, the model won’t understand them.

Use flat fields like strings, numbers, and booleans.

Good:

"inputs": {  
"name": "string", 
 "email": "string",  
"newsletter\_opt\_in": "boolean"
}

Bad:

"inputs": {  
"user": {   
 "profile": {     
 "contact": {       
 "email": "string"      
}   
 }  
}
}

Why: The model needs to quickly match fields to what the user said. Flat inputs make that easier.

3. Write clear, simple descriptions

Say what the action does in plain English. No code, no jargon.

Good: “Adds a new task to a project with a name and due date.”
Bad: “Calls POST /tasks with required params to create new task entity.”

Why: The model reads your descriptions to decide what each action does. If it’s too vague or too technical, it may skip or misuse it.
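
In the schema itself, that plain-English sentence is simply the action’s description field (the action name here is made up for illustration):

{
  "name": "add_task",
  "description": "Adds a new task to a project with a name and due date."
}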

4. Include edge cases in your description (where possible)

LLMs don’t infer what’s not written. If there's a condition, you need to say it directly.

For example:

  • “Creates a form. If one with the same name exists, it will be updated.”
  • “Deletes a user. Cannot delete if user is an admin.”

Why: LLMs don’t infer edge cases. If there’s an exception or limit, the model won’t know unless you say so. Put it in the description — clearly and directly.
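
As a sketch, that kind of edge case belongs right in the description field (the action and fields here are hypothetical):

{
  "name": "delete_user",
  "description": "Deletes a user by ID. Cannot delete a user who is an admin.",
  "inputs": { "user_id": "string" }
}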

5. Always include output — and make it helpful

The model needs to know what your action returns. If you leave the output blank, it may think the action didn’t work.

Bad: "output": {}

Good:

"output": { 
 "task\_id": "string", 
 "url": "string",  
"message": "Task created! Here's the link: https://..."}

Tips for better output:

  • Always include a message the model can show to the user
  • Add links or IDs if the user might need them
  • Use short and simple field names

Why: ChatGPT shows your output to the user. If it’s blank or confusing, users think your tool didn’t work — even if it did.

6. Make your actions small and easy to chain

LLMs don’t do everything at once. They take things one step at a time.

So instead of building one big action that does everything, build small actions that each do one clear thing — so the model can put them together.

**Here’s an example:** A user might say: “Get form submissions tagged ‘error’ and create tasks for each one in Linear.”

The model will try to do this in steps:

  1. First, call an action like list_submissions(tag="error")
  2. Then, for each result, call create_task(title, description)

This only works if your actions are built to be used this way.

If they aren’t, things can break:

  • The model might send all the data to the wrong action
  • It might skip your tool and try another one
  • Or it might just stop, because it doesn’t know what to do next

And if your action doesn’t return anything useful — like an ID or a message — the model won’t know what happened, and it won’t know how to continue.

Why: The model is trying to build a little workflow out of your actions. If your actions are too big, too vague, or don’t connect well — the workflow falls apart.
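
As a rough sketch, two chainable actions for the example above might look like this (simplified format, with illustrative names and fields):

{
  "actions": [
    {
      "name": "list_submissions",
      "description": "Lists form submissions, optionally filtered by tag.",
      "inputs": { "tag": "string" },
      "output": { "submissions": "array of { id, title, description }" }
    },
    {
      "name": "create_task",
      "description": "Creates a task in Linear with a title and description.",
      "inputs": { "title": "string", "description": "string" },
      "output": { "task_id": "string", "url": "string", "message": "string" }
    }
  ]
}

Because list_submissions returns the fields create_task needs (a title and description for each submission), the model can feed the results of one call straight into the next.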

7. Add soft rules to your descriptions

LLMs don’t follow code rules. They just read what you write and try to follow it. So if there’s an important rule — say it with words.

Bad:

"amount": {  
"type": "number"
}

This tells the model it needs a number, but not which values are allowed.

Good:

"amount": {  "type": "number",  "description": "Must be between 0 and 10,000"}

Now the model knows what’s allowed.

Bad:

"date": {  "type": "string"}

This just states that it’s a string but doesn’t specify what the format should be.

Good:

"date": {  "type": "string",  "description": "Use format YYYY-MM-DD (like 2025-03-01)"}

Now the model knows exactly what the date should look like.

Why: The model doesn’t run your backend checks. If you want it to follow a rule, say it clearly in your description — just like you would to another person.

8. Give the model a reason to choose your tool

At the top of your schema, include a short description that tells the model what your tool does — and when to use it.

{  "name": "ExampleFormBuilderCo", 
 "description": "Create and manage forms. Best for surveys, feedback, and internal data collection."}

Why: This is what the model sees first when choosing tools. If it’s generic or unclear, your product gets skipped in favor of one that sounds like a better fit.

9. Test with messy prompts

Real people don’t type perfect prompts. They say things like:

“hey i need to create a feedback form with 3 questions then get the latest replies and drop them in gsheet”

Try prompts like that when you test.

If the model doesn’t choose the right actions, or does things out of order, or just gives up — that means something in your schema still isn’t clear.

10. Debug the AI loop like product QA

You’re not just building an API anymore. You’re building something the model has to use — like a real user would.

Try it out and watch what happens.

  • Does it choose the right action?
  • Does it mess up the input?
  • Do some prompts just... do nothing?

Those are signs that something in your schema isn’t clear. Fixing those problems makes your tool easier for the model to use.

To the model, your schema is the product. So treat it like one.
