17 Comments

Your app is wired into MCP. But ChatGPT still skips it. Here’s why.

If you’ve connected your SaaS to MCP, congrats — your product can now be used inside ChatGPT or Claude.

But here’s the thing: You’re no longer designing for humans. You’re designing for language models.

LLMs don’t care about your UI. They care about actions, structure, and clarity.

Here’s how to make your actions actually work inside AI tools.

1. Use actions that do only one thing

Each action should only do one thing.

If it tries to do more than one thing, the model can get confused.

Good action: create_invoice

Bad action: create_invoice_and_send_email_if_client_exists

Why: The model reads both the name and the description to understand what the action does.

If either one is too long or tries to explain too many things, the model gets confused or ignores it.

Too much logic leads to mistakes.
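As a sketch, the compound action above could be split into two single-purpose tools. These are plain Python dicts for illustration only, with invented names and fields, not a real MCP SDK API:

```python
# Hypothetical tool definitions, shown as plain dicts for illustration.
create_invoice = {
    "name": "create_invoice",
    "description": "Creates an invoice for a client with an amount and due date.",
    "inputs": {"client_id": "string", "amount": "number", "due_date": "string"},
}

send_invoice_email = {
    "name": "send_invoice_email",
    "description": "Emails an existing invoice to the client it belongs to.",
    "inputs": {"invoice_id": "string"},
}

# Each tool has one job; the model can call them in sequence when needed.
tools = [create_invoice, send_invoice_email]
```

The conditional logic ("if client exists") moves out of the action name and into the model's own reasoning, which is where it belongs.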

2. Make inputs easy to read

If your inputs are deeply nested or complex, the model will struggle to fill them in correctly.

Use flat fields like strings, numbers, and booleans.

Good:

"inputs": {  
"name": "string", 
 "email": "string",  
"newsletter\_opt\_in": "boolean"
}

Bad:

"inputs": {  
"user": {   
 "profile": {     
 "contact": {       
 "email": "string"      
}   
 }  
}
}

Why: The model needs to quickly match fields to what the user said. Flat inputs make that easier.
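If your backend already expects the nested shape, one option is to expose only the flat fields to the model and re-nest them server-side. A minimal sketch, with illustrative field names:

```python
def nest_inputs(flat: dict) -> dict:
    """Re-nest the flat, model-facing fields into the backend's shape."""
    return {
        "user": {
            "profile": {
                "name": flat["name"],
                "contact": {"email": flat["email"]},
            },
            "newsletter_opt_in": flat["newsletter_opt_in"],
        }
    }

# The model only ever sees the flat shape:
flat = {"name": "Ada", "email": "ada@example.com", "newsletter_opt_in": True}
nested = nest_inputs(flat)
```

The model matches "Ada" and the email address straight to `name` and `email`; your code handles the nesting it never needs to see.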

3. Write clear, simple descriptions

Say what the action does in plain English. No code, no jargon.

Good: “Adds a new task to a project with a name and due date.”
Bad: “Calls POST /tasks with required params to create new task entity.”

Why: The model reads your descriptions to decide what each action does. If it’s too vague or too technical, it may skip or misuse it.

4. Include edge cases in your description (where possible)

LLMs don’t infer what’s not written. If there's a condition, you need to say it directly.

For example:

  • “Creates a form. If one with the same name exists, it will be updated.”
  • “Deletes a user. Cannot delete if user is an admin.”

Why: LLMs don’t infer edge cases. If there’s an exception or limit, the model won’t know unless you say so. Put it in the description — clearly and directly.

5. Always include output — and make it helpful

The model needs to know what your action returns. If you leave the output blank, it may think the action didn’t work.

Bad: "output": {}

Good:

"output": { 
 "task\_id": "string", 
 "url": "string",  
"message": "Task created! Here's the link: https://..."}

Tips for better output:

  • Always include a message the model can show to the user
  • Add links or IDs if the user might need them
  • Use short and simple field names

Why: ChatGPT shows your output to the user. If it’s blank or confusing, users think your tool didn’t work — even if it did.
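One way to guarantee every response includes a user-facing message is to build the output in a single helper. A sketch with invented names:

```python
def build_output(task_id: str, url: str) -> dict:
    """Return an output the model can relay directly to the user."""
    return {
        "task_id": task_id,
        "url": url,
        "message": f"Task created! Here's the link: {url}",
    }

out = build_output("task_123", "https://example.com/tasks/task_123")
```

The `task_id` lets the model chain into follow-up actions; the `message` gives it something to say even when it does nothing else with the result.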

6. Make your actions small and easy to chain

LLMs don’t do everything at once. They take things one step at a time.

So instead of building one big action that does everything, build small actions that each do one clear thing — so the model can put them together.

**Here’s an example:** A user might say: “Get form submissions tagged ‘error’ and create tasks for each one in Linear.”

The model will try to do this in steps:

  1. First, call an action like list_submissions(tag="error")
  2. Then, for each result, call create_task(title, description)

This only works if your actions are built to be used this way.

If they aren’t, things can break:

  • The model might send all the data to the wrong action
  • It might skip your tool and try another one
  • Or it might just stop, because it doesn’t know what to do next

And if your action doesn’t return anything useful — like an ID or a message — the model won’t know what happened, and it won’t know how to continue.

Why: The model is trying to build a little workflow out of your actions. If your actions are too big, too vague, or don’t connect well — the workflow falls apart.
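You can dry-run the two-step workflow above with stubbed actions (the data and function names here are made up) to confirm each step's output actually feeds the next call:

```python
# Stubbed actions standing in for real MCP tools.
def list_submissions(tag: str) -> list[dict]:
    data = [
        {"id": "s1", "tag": "error", "text": "Form crashed on submit"},
        {"id": "s2", "tag": "ok", "text": "All good"},
    ]
    return [s for s in data if s["tag"] == tag]

def create_task(title: str, description: str) -> dict:
    return {"task_id": f"task_for_{title}", "message": f"Created task: {title}"}

# The model chains them: step 1's output becomes step 2's input.
created = [
    create_task(title=s["id"], description=s["text"])
    for s in list_submissions(tag="error")
]
```

If `list_submissions` returned nothing identifiable, the loop body would have nothing to pass to `create_task` — which is exactly how real chains break.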

7. Add soft rules to your descriptions

LLMs don’t follow code rules. They just read what you write and try to follow it. So if there’s an important rule — say it with words.

Bad:

"amount": {  
"type": "number"
}

This tells the model it needs a number, but not what kind.

Good:

"amount": {  "type": "number",  "description": "Must be between 0 and 10,000"}

Now the model knows what’s allowed.

Bad:

"date": {  "type": "string"}

This just states that it’s a string but doesn’t specify what the format should be.

Good:

"date": {  "type": "string",  "description": "Use format YYYY-MM-DD (like 2025-03-01)"}

Now the model knows exactly what the date should look like.

Why: The model doesn’t run your backend checks. If you want it to follow a rule, say it clearly in your description — just like you would to another person.
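Soft rules in descriptions don't replace backend checks — the model may still get it wrong. It helps to enforce the same rules server-side so the two stay in sync. A sketch mirroring the two descriptions above:

```python
from datetime import datetime

def validate_amount(amount: float) -> bool:
    # Mirrors the description: "Must be between 0 and 10,000".
    return 0 <= amount <= 10_000

def validate_date(date: str) -> bool:
    # Mirrors the description: "Use format YYYY-MM-DD (like 2025-03-01)".
    try:
        datetime.strptime(date, "%Y-%m-%d")
        return True
    except ValueError:
        return False
```

When validation fails, return the rule in your error message too — that's often the only way the model learns to retry with a corrected value.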

8. Give the model a reason to choose your tool

At the top of your schema, include a short description that tells the model what your tool does — and when to use it.

{  "name": "ExampleFormBuilderCo", 
 "description": "Create and manage forms. Best for surveys, feedback, and internal data collection."}

Why: This is what the model sees first when choosing tools. If it’s generic or unclear, your product gets skipped in favor of one that sounds like a better fit.

9. Test with messy prompts

Real people don’t type perfect prompts. They say things like:

“hey i need to create a feedback form with 3 questions then get the latest replies and drop them in gsheet”

Try prompts like that when you test.

If the model doesn’t choose the right actions, or does things out of order, or just gives up — that means something in your schema still isn’t clear.

10. Debug the AI loop like product QA

You’re not just building an API anymore. You’re building something the model has to use — like a real user would.

Try it out and watch what happens.

  • Does it choose the right action?
  • Does it mess up the input?
  • Do some prompts just... do nothing?

Those are signs that something in your schema isn’t clear. Fixing those problems makes your tool easier for the model to use.
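A lightweight way to QA the loop is to route every model call through a stub that logs what was called, then assert on the sequence. A sketch with invented action names:

```python
calls = []

def record(action: str, **inputs) -> dict:
    """Stub that logs which action was called and with what inputs."""
    calls.append((action, inputs))
    return {"message": f"{action} ok"}

# Replay what the model did for one messy prompt, then inspect it.
record("create_form", name="Feedback", questions=3)
record("list_submissions", form="Feedback")
record("export_to_sheet", rows=[])

actions_in_order = [name for name, _ in calls]
```

Run the same messy prompt a few times and compare the recorded sequences — if the order or inputs drift between runs, that's a schema-clarity problem, not a model bug.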

To the model, your schema is the product. So treat it like one.

on November 20, 2025
  1.

    I noticed you're experiencing challenges with ChatGPT not integrating with your app. It can be frustrating when tools don't work together seamlessly. AI Navigator could provide insights on optimizing your software tools, and I invite you to try our free scanner to see how we can help.

  2.

    Nice info, thanks for sharing. Makes a lot of sense.

  3.

    This is one of the few posts that actually treats MCP integration as a UX design problem for LLMs instead of just an API wiring exercise. The emphasis on single-purpose actions, flat inputs, and explicit edge cases matches what I’ve been seeing in my own experiments — if the model can’t skim the schema and instantly map it to the user’s request, it just quietly ignores you.

    I especially like the idea that the model is trying to build a little workflow out of micro-actions, and that your job is to make those actions chainable with helpful outputs (IDs, URLs, short messages) instead of black boxes. That “schema is the product” mindset is a great forcing function; I’m going to steal that line for my own team’s reviews.

    Have you seen any patterns or metrics that correlate with higher tool-selection rates (e.g., number of actions, average input depth, description length), or is it still mostly qualitative testing with messy prompts?

  4.

    "Your App Is Wired into MCP, But ChatGPT Still Skips It. Here’s Why." offers a clear analysis of integration challenges between apps and AI platforms. It explains why connections don’t always guarantee visibility or functionality, highlighting technical limitations, prioritization issues, and system constraints. Insightful and practical, the piece helps developers understand and navigate AI interoperability obstacles effectively.

  5.

    MCP integration struggles are real. The discoverability problem you highlight is crucial - having the best tool means nothing if the LLM doesn't know when to use it. Curious what patterns you found that improve tool selection rates.

  6.

    The main takeaway is: when integrating with MCP or any LLM environment, design for the model, not humans. Keep actions single-purpose, inputs flat and simple, outputs clear, and edge cases explicit. Small, chainable actions with clear descriptions and soft rules make workflows reliable. Testing with real, messy prompts ensures your tool behaves as intended. Treat your schema as the product—clarity and predictability are key for LLMs to actually use your app.

  7.

    Great insight, thanks for sharing

  8.

    Interesting explanation! Even with MCP integration, ChatGPT may skip apps due to context prioritization, relevance, or compatibility. Understanding these limitations helps developers optimize their integrations and ensures their apps are more likely to be utilized effectively.

  9.

    The key takeaway is: design your actions for AI, not humans. Keep them single-purpose, simple, and clear with flat inputs, explicit edge cases, and helpful outputs. Small, chainable actions make workflows work smoothly, while clear descriptions and soft rules help the model understand exactly what to do.

  10.

    Interesting explanation! Even if your app is integrated with MCP, ChatGPT may skip it due to prioritization, context limits, or compatibility issues. Understanding these factors helps developers optimize integrations and ensure their apps function effectively within AI environments.

  11.

    Even when your app is integrated with MCP, ChatGPT may skip it due to prompt-routing, plugin prioritization, or internal filters. Integration alone doesn’t guarantee recognition. Fixing this requires aligning API prompts, ensuring correct plugin precedence, or updating ChatGPT’s handling of external apps to make your integration consistently accessible.

  12.

    That's a sharp observation. Even when your app is technically integrated with MCP, ChatGPT might ignore it because of prompt-routing issues or internal filters preventing model awareness. Fixing that requires aligning API prompts, enforcing correct plugin precedence, or updating how ChatGPT internally prioritizes your integration — not just wiring it in.

  13.

    This is one of the clearest explainers I’ve seen on why MCP actions fail silently — it really is a UX problem, just for LLMs instead of humans. The “design for the model, not the user” mindset shift is huge and underrated. Going to audit my own schema now, especially the action granularity and output fields.

  14.

    It’s frustrating that although the app is integrated into MCP, ChatGPT still ignores it. Possibly the model’s prompt‑handling or internal filters block recognition. This underscores real integration limits — even technically solid connections don’t ensure usage. Developers must improve API support or enforce tighter alignment to unlock full potential and performance.

  15.

    This breakdown is extremely on-point, especially for teams treating MCP integration like a traditional API problem. Once your tool sits inside an LLM environment, you’re not just building endpoints anymore — you’re shaping how a reasoning engine understands and executes your product.
    What you said about “designing for models, not humans” is the real shift. LLMs don’t infer intent the way users do; they follow whatever structure you give them. When schemas are unclear, too nested, or overloaded with logic, the model doesn’t struggle — it simply skips your tool.
    The emphasis on single-purpose actions and flat inputs is underrated but critical. The model’s ability to chain micro-actions is exactly what makes these environments powerful, and it can only do that if each action is predictable and unambiguous.
    I also appreciate the reminder that outputs matter. Developers often leave them empty, but for LLMs, the output is the only feedback loop. If the model can’t “see” what happened, it can’t continue the workflow — something that feels obvious only after watching it fail.
    Testing with messy prompts is another gem. Real users don’t speak in perfect JSON schemas, so the model needs guardrails that explain edge cases, formats, limits, and intent in plain language.
    Overall, this is one of the clearest explanations I’ve seen for why tools get ignored and how to fix it. The mindset shift alone — treating the schema as the product — is going to save a lot of headaches for teams integrating with MCP.

  16.

    Great call, this is easy to overlook. Thank ya.

  17.

    sometimes we overthink it, but i totally agree - if you want a model to follow a rule, just say it clearly, like you would to another person
