
How we structured our FAQ to make AI support actually work (the non-obvious parts)

When people think about AI for customer support, they focus on the AI.

The thing that actually determines whether it works is the FAQ.

Garbage in, garbage out. But the "garbage" in support FAQs is more subtle than it sounds.

Here's what we learned about structuring a support FAQ that AI can actually use reliably:

Problem 1: Answers written for humans, not for retrieval

Human-written FAQ answers are often full of implied context.

"To reset your password, go to settings."

A human reader knows which settings page, roughly where to look, what the reset form looks like.

An AI using this answer to generate a reply will produce something technically accurate but incomplete.

Fix: Write every answer as if the reader has never seen the product before. Full steps. No implied knowledge.

Problem 2: One question, multiple valid answers

"How do I cancel?" might have 3 different correct answers depending on:

  • Whether they're on monthly or annual billing
  • Whether they have pending charges
  • Whether they're within a refund window

A single FAQ entry for "How do I cancel?" will always be partially wrong for someone.

Fix: Split into separate entries with clear conditions. "How do I cancel a monthly plan?", "How do I cancel an annual plan?", "Am I eligible for a refund if I cancel?"
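One way to make those conditions machine-usable is to attach them as metadata on each entry, then filter by the customer's account before the AI ever sees an answer. A minimal sketch, assuming a simple in-memory store; the field names and the `select_entries` helper are hypothetical, not from any particular FAQ tool:

```python
# Hypothetical condition-scoped FAQ entries. Each entry only applies
# when its "conditions" match the customer's account attributes.
FAQ_ENTRIES = [
    {
        "question": "How do I cancel a monthly plan?",
        "conditions": {"billing": "monthly"},
        "answer": "Go to Settings > Billing > Cancel plan. ...",
    },
    {
        "question": "How do I cancel an annual plan?",
        "conditions": {"billing": "annual"},
        "answer": "Annual plans cancel at the end of the current term. ...",
    },
    {
        "question": "Am I eligible for a refund if I cancel?",
        "conditions": {"refund_window": True},
        "answer": "If you are within the refund window, ...",
    },
]

def select_entries(entries, customer):
    """Return only the entries whose conditions match this customer."""
    return [
        e for e in entries
        if all(customer.get(k) == v for k, v in e["conditions"].items())
    ]

# A monthly customer inside the refund window sees two relevant entries,
# and never the annual-plan answer.
customer = {"billing": "monthly", "refund_window": True}
for entry in select_entries(FAQ_ENTRIES, customer):
    print(entry["question"])
```

The point of the filter is that "partially wrong for someone" becomes impossible by construction: an answer the customer can't act on is never retrieved in the first place.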

Problem 3: Outdated answers that nobody updated

FAQ answers go stale. Product changes. Policies change. UI changes.

An AI using an outdated FAQ answer is worse than no AI — it confidently gives wrong information.

Fix: Every FAQ entry needs an owner and a review date. If it hasn't been reviewed in 90 days, it gets flagged automatically.
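The 90-day flag is easy to automate if each entry records an owner and a last-reviewed date. A minimal sketch, assuming those two fields exist on every entry; the field names are hypothetical:

```python
# Flag FAQ entries whose last review is more than 90 days old,
# so the owner gets pinged before the AI keeps serving stale answers.
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)

def stale_entries(entries, today=None):
    """Return entries not reviewed within the last 90 days."""
    today = today or date.today()
    return [e for e in entries if today - e["last_reviewed"] > REVIEW_INTERVAL]

entries = [
    {"question": "How do I cancel?", "owner": "sam",
     "last_reviewed": date(2026, 1, 2)},
    {"question": "How do I export my data?", "owner": "ana",
     "last_reviewed": date(2026, 4, 1)},
]

for e in stale_entries(entries, today=date(2026, 4, 15)):
    print(f"FLAG: '{e['question']}' needs review, ping {e['owner']}")
```

Run this on a schedule (a daily cron is plenty) and route the flags to whoever owns the entry, not to a shared inbox where they'll rot.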

Problem 4: Answers that make commitments

"We'll respond within 24 hours."
"Refunds are always processed in 3-5 days."

If these are in your FAQ and the AI uses them, the AI is making commitments on your behalf — commitments you may not always be able to keep.

Fix: Remove any SLA promises or time-bound commitments from FAQ content the AI can access. Those conversations should always go to a human.
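You can enforce this with a pre-indexing check that rejects any answer containing SLA-style language before it reaches the AI's knowledge base. A rough sketch; the patterns below are an illustration, not an exhaustive list, and you'd tune them to your own FAQ's wording:

```python
# Scan an FAQ answer for time-bound promises the AI shouldn't repeat.
import re

COMMITMENT_PATTERNS = [
    r"\bwithin \d+\s*(hours?|days?|business days?)\b",  # "within 24 hours"
    r"\b\d+\s*-\s*\d+\s*(hours?|days?)\b",              # "3-5 days"
    r"\balways\b",
    r"\bguarantee[ds]?\b",
]

def has_commitment(answer: str) -> bool:
    """True if the answer contains an SLA-style promise."""
    return any(re.search(p, answer, re.IGNORECASE)
               for p in COMMITMENT_PATTERNS)

print(has_commitment("We'll respond within 24 hours."))            # True
print(has_commitment("Refunds are always processed in 3-5 days."))  # True
print(has_commitment("Go to Settings > Billing to cancel."))        # False
```

Flagged entries stay in the human-facing FAQ if you want them there; they just never get indexed for the AI, which is what keeps the commitment a human decision.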

The FAQ structure is 80% of the work. Most people spend 80% of their time on the AI selection. That's backwards.

Anyone else been through the process of structuring a support knowledge base for AI use? What problems did you run into?

on April 15, 2026