
Why “AI in QMS” isn’t solving the real problem (yet)

I’ve been working with QMS platforms in MedTech for a while, and one pattern keeps coming up.

Most systems are very good at documenting change.

But that’s not where things break.

The real problem starts after a change is approved.

A single update can affect multiple requirements, risk assessments, verification activities, and parts of the Technical File.
In most teams, figuring that out is still a manual process.

People go back through documents, cross-check links, and try to reconstruct what needs to be updated.

Not because the data isn’t there, but because the relationships between that data aren’t visible.

What’s interesting is how “AI in QMS” is usually positioned.

Dashboards, search, summaries, regulatory updates.
Useful, but mostly observational.

It helps you see what’s there.
It doesn’t help you understand what needs to change.

The shift that actually matters is when AI becomes part of the workflow itself.

Not analyzing data after the fact, but working inside the process:

evaluating events as they happen
identifying what’s impacted
helping update the Technical File
keeping traceability intact

We recently put together a short report breaking this down in more detail, especially around change impact analysis and traceability in real workflows.

If you’re working in regulated environments (MedTech, FDA, ISO 13485), I’m curious whether this matches what you’re seeing:

https://qmswrapper.com/ai-qms-for-medical-devices/

Would be interesting to hear how others are approaching this.

on April 23, 2026

    The "observational vs workflow" distinction is the right framing. We see the same pattern outside MedTech - in every regulated or process-heavy environment, AI gets bolted on as a reporting layer rather than embedded in the actual decision chain.

    The change impact analysis problem you're describing is essentially a graph problem that most QMS tools treat as a document problem. A change to one requirement should propagate through every connected node - risk assessments, verification activities, technical file sections - automatically. Instead, someone manually opens 15 documents and hopes they didn't miss a link. AI that understands the dependency graph and flags "this change affects these 7 items" is genuinely useful. AI that summarises what you already know is not.
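    To make the graph framing concrete, here's a minimal sketch. The item IDs and link structure are hypothetical (not any particular QMS's data model); the point is just that once traceability links are stored as a graph, "what does this change affect?" is a reachability query, not a document hunt:

    ```python
    from collections import deque

    # Hypothetical traceability links: item -> downstream items it affects.
    links = {
        "REQ-12": ["RISK-03", "VER-07", "TF-2.4"],
        "RISK-03": ["TF-3.1"],
        "VER-07": ["TF-2.4", "TF-4.2"],
    }

    def impacted(changed_item, links):
        """Breadth-first walk of the traceability graph: everything
        reachable from the changed item needs review."""
        seen, queue = set(), deque([changed_item])
        while queue:
            item = queue.popleft()
            for downstream in links.get(item, []):
                if downstream not in seen:
                    seen.add(downstream)
                    queue.append(downstream)
        return sorted(seen)

    print(impacted("REQ-12", links))
    # Flags RISK-03, TF-2.4, TF-3.1, TF-4.2 and VER-07 for review
    ```

    The hard part in practice isn't the traversal, it's keeping the link data accurate as documents change.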

    We're working with an industrial client right now on an NDT quality control system - different domain but the same underlying challenge. When a test result comes back, who needs to know, what gets updated, and what's the audit trail? The workflow isn't complicated. Making sure nothing falls through the cracks is.
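    A rough sketch of that "nothing falls through the cracks" shape, with hypothetical routing rules and role names (not our client's actual system): every routing decision is itself a recorded audit event, so the trail and the workflow can't drift apart.

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Hypothetical routing rules: result status -> roles to notify.
    ROUTING = {
        "fail": ["quality_engineer", "production_lead"],
        "pass": ["quality_engineer"],
    }

    @dataclass
    class AuditTrail:
        entries: list = field(default_factory=list)

        def record(self, event: str):
            # Timestamped, append-only log of everything that happened.
            self.entries.append((datetime.now(timezone.utc).isoformat(), event))

    def handle_result(test_id: str, status: str, trail: AuditTrail):
        """Route a test result to the right roles, logging each step
        so the audit trail is a side effect of the workflow itself."""
        trail.record(f"{test_id}: result received ({status})")
        for role in ROUTING.get(status, []):
            trail.record(f"{test_id}: notified {role}")
        return trail

    trail = handle_result("NDT-0042", "fail", AuditTrail())
    for timestamp, event in trail.entries:
        print(timestamp, event)
    ```

    The logic is trivially simple, which is the point: the value is in guaranteeing it runs every time, not in the cleverness of the rules.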

    Curious whether you've found that teams resist embedding AI into the workflow itself vs keeping it as an advisory layer. In our experience, regulated environments are cautious about letting AI touch the process directly - they want it to recommend, not act.
