I've spent 18 years reviewing vendor Statements of Work. The same problems show up almost every time. Vague scope. Missing change control. A knowledge transfer section that's either one paragraph or completely absent. I could predict what would be missing before I opened the document.
So I built a tool that checks for them automatically. SoWScanner scores a vendor SoW across eight delivery risk categories in about 30 seconds. Free, no sign-up: sowscanner.com
Here's what actually happened building it.
The first version let the AI handle everything: reading the document and scoring it. It didn't work. The model inflated scores because it was trying to be helpful rather than accurate; a borderline SoW would come out looking fine. I had to separate the two jobs: one model reads and extracts, and a separate deterministic engine scores. Same input, same score, every time.
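The split can be sketched roughly like this. All the names and thresholds below are illustrative, not SoWScanner's actual schema: the point is only that the model fills a structured record of facts, and a pure function turns that record into a number.

```python
from dataclasses import dataclass

# Hypothetical extraction result: the fields the model pulls out of the
# document. The model's only job is to populate this record; it never
# assigns a score itself.
@dataclass(frozen=True)
class Extraction:
    has_change_control: bool
    acceptance_criteria_count: int
    knowledge_transfer_paragraphs: int

def score_knowledge_transfer(e: Extraction) -> int:
    """Deterministic scoring: same extraction in, same score out."""
    if e.knowledge_transfer_paragraphs == 0:
        return 0      # section completely absent
    if e.knowledge_transfer_paragraphs == 1:
        return 40     # one thin paragraph
    return 100        # substantive section

# Because scoring is a pure function of the extraction, a borderline
# SoW can't drift upward on a friendly rerun of the model.
e = Extraction(has_change_control=True,
               acceptance_criteria_count=3,
               knowledge_transfer_paragraphs=1)
assert score_knowledge_transfer(e) == 40
```

The nondeterminism is confined to extraction, where it is easier to audit: you can read the extracted fields and check them against the document, which is much harder to do with a bare number.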
The second problem was knowing what to score. Eight categories sounds clean; getting there wasn't. I kept second-guessing the weightings. Is knowledge transfer more important than acceptance criteria? Is governance its own category or part of scope? I had to stop, think, adjust, and repeat until the scores matched what 18 years of instinct was already telling me. Working through that was the most useful thing I did. It forced me to make explicit what had always been implicit.
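One way to make weightings explicit is as a plain table that must sum to one, with the overall score as a weighted average of per-category scores. This is a minimal sketch with made-up category names and weights, not the actual weightings behind SoWScanner:

```python
# Hypothetical weights over eight delivery risk categories.
# Forcing them to sum to 1.0 makes every trade-off visible:
# raising one category means deciding which others to lower.
WEIGHTS = {
    "scope": 0.18,
    "change_control": 0.15,
    "acceptance_criteria": 0.14,
    "knowledge_transfer": 0.13,
    "governance": 0.12,
    "milestones": 0.10,
    "payment_terms": 0.09,
    "exit": 0.09,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9

def overall(scores: dict[str, float]) -> float:
    """Weighted average of per-category scores (each 0-100)."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
```

Writing the instinct down this way also makes it testable: you can run past SoWs through the table and check whether the numbers land where experience says they should.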
I built this on about 15 hours a week, around a full-time job and two young kids, working in short, focused bursts. A year ago I couldn't have done this at all. Alongside the products I've been building a ways-of-working framework: a structured system for running AI-accelerated builds that gets faster with every project. SoWScanner is the first product where that system was fully in place, and the difference was obvious.
That's what I'm most proud of. Not just that the tool works. That I built the capability to build it.