
DOJ Title II hits in 12 months. Cities quote $50K per audit. My scanner runs from $19/mo.

I spent the last six weeks building an accessibility scanner. Not because I'm an accessibility expert — I'm not — but because I went down a rabbit hole reading procurement RFPs, FTC consent orders, and ADA case law, and the more I read the more it looked like a category that was completely broken.

Here's what I found, what I built, and what I'd do differently if I had to start over.

The setup

Two things hit the accessibility-software market in 2025 that nobody outside the industry seems to have noticed.

One: the FTC fined accessiBe $1 million in April 2025 for deceptive marketing. Their AI-overlay product had been sold for years on the promise that it could make any website "fully WCAG compliant" with one line of JavaScript. The consent order says, in plain English, that this was not true. The overlay applied cosmetic patches; it did not fix the underlying source code; the websites buying it were still sued under the ADA. (FTC v. accessiBe Ltd., 2025 consent order — public.)

Two: the U.S. Department of Justice finalised the Title II web-accessibility rule. Public entities (cities, counties, school districts, state agencies) serving 50,000 or more residents must bring their websites and mobile apps into conformance with WCAG 2.1 AA by April 26, 2027. Smaller entities have until April 26, 2028. There are around 1,400 entities in the larger tier and 38,000 in the smaller. (28 CFR Part 35 — public.)

So you have:

  • A category leader that just got fined for selling a product that didn't work,
  • A federal deadline that creates demand for tens of thousands of buyers, and
  • A meaningful private-litigation tail — research from Seyfarth Shaw shows around 22.6% of ADA Title III suits in the first half of 2025 named defendants who had an overlay product installed at the time of filing. The overlay didn't prevent the case; in some pleadings it was evidence of the deceptive-claim count.

If you're a city manager with a $5M general budget and a Title II deadline coming up, your options today are roughly: pay a consultancy $30-80K for a manual audit and a remediation plan, install an overlay and hope (no), or build it yourself.

I figured there had to be a fourth option.

What I built

AccessiScan crawls a public website, runs WCAG 2.1 AA conformance checks on every page it finds, and returns a per-criterion report. That part is table-stakes — the open-source axe-core library can do most of it. The interesting part is what happens after the scan.
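To make the "table-stakes" part concrete, here's a toy version of one rule — the image-alt check — in plain JavaScript. It's illustrative only: a real engine like axe-core evaluates the live DOM, not HTML strings with regexes, but the shape of a rule (selector, test, violation record) is the same.

```javascript
// Toy version of a WCAG 1.1.1 check: flag <img> tags with no alt attribute.
// Real scanners (axe-core) parse the rendered DOM; this regex sketch only
// illustrates the shape of a rule: selector + test + violation record.
function checkImageAlt(html) {
  const violations = [];
  const imgTag = /<img\b[^>]*>/gi;
  let match;
  while ((match = imgTag.exec(html)) !== null) {
    if (!/\balt\s*=/i.test(match[0])) {
      violations.push({
        criterion: "1.1.1 Non-text Content",
        impact: "critical",
        snippet: match[0],
        offset: match.index,
      });
    }
  }
  return violations;
}

// Example: one compliant image, one violation.
const report = checkImageAlt(
  '<img src="logo.png" alt="AccessiScan logo"><img src="y18.svg">'
);
console.log(report.length); // 1
```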

For every violation it can repair, it generates a pull request against your repo with the actual fix code. Missing alt text gets a contextual alt= attribute. Empty buttons get an aria-label. Form fields without labels get <label> wrappers. Low-contrast text gets a token swap suggestion. The fix lands in a branch, with the source citation linked in the PR description, and your engineers review and merge.
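Mechanically, one of these fixes can be sketched like this — a simplified stand-in for the real pipeline, which derives the label from surrounding source context (JSX, templates) rather than taking it as an argument:

```javascript
// Simplified sketch of one auto-fix rule: give an empty <button> an
// aria-label. The real pipeline infers the label from surrounding source;
// here the caller supplies it, to keep the sketch self-contained.
function fixEmptyButton(html, suggestedLabel) {
  // Match <button ...></button> pairs with whitespace-only content.
  return html.replace(
    /<button\b([^>]*)>(\s*)<\/button>/gi,
    (full, attrs, inner) =>
      /\baria-label\s*=/i.test(attrs)
        ? full // already labeled: leave untouched
        : `<button${attrs} aria-label="${suggestedLabel}">${inner}</button>`
  );
}

const fixed = fixEmptyButton(
  '<button class="icon-btn"></button>',
  "Submit application"
);
console.log(fixed);
// <button class="icon-btn" aria-label="Submit application"></button>
```

In the product the result lands as a branch diff rather than a string, but the review-and-merge contract is the same: the tool proposes, the engineer approves.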

It also exports a VPAT 2.5 mapped to WCAG 2.1 A and AA — which is the document procurement teams actually need when they're filling out a Title II compliance attestation or an RFP response. And it ships a GitHub Action that runs the same scan on every pull request, so you don't regress.
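A PR-gating workflow of that kind might look roughly like this — the action name, inputs, and preview URL below are hypothetical placeholders, not AccessiScan's published interface:

```yaml
# Hypothetical workflow — action name and inputs are illustrative only.
name: accessibility-scan
on: [pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run WCAG 2.1 AA scan
        uses: accessiscan/scan-action@v1    # hypothetical action name
        with:
          url: https://preview.example.com  # PR preview deployment
          fail-on: serious                  # block merge on serious+ violations
```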

Pricing starts at $19/month for one site. The Agency tier with the GitHub Action template, white-label VPAT exports, and unlimited domains is $49/month. The Business tier adds Auto-Fix PRs against your repo, continuous monitoring, and the EU EAA + Section 508 pack — that's $299/month, the SLA-backed plan procurement teams ask about. Public entities and procurement teams get a 20% discount on the annual plan. The free tier runs a real scan against any URL with no signup; that's the one I'd link from a procurement RFP.

What I learned that changed the build

I'll save you the obvious lesson (overlays don't work) and skip to the three findings that actually surprised me.

1. The buyer is not the user. The person paying for accessibility software is almost never the person who has to use the report. The procurement officer wants a VPAT and a renewal contract. The engineer wants source-code diffs in a language their codebase already speaks (Tailwind classes, not generic CSS). The accessibility consultant wants a WCAG-criterion-level breakdown for the audit letter. I redesigned the report three times before I realised I needed three different report views, not one.

2. "Compliant" is not a binary, and customers know it. Every accessibility tool I evaluated marketed itself as a route to "100% compliance." The actual conversation procurement officers have with their counsel is about demonstrating good-faith effort and reducing legal exposure. That's a different product. AccessiScan reports a conformance score with explicit caveats about manual-test gaps (focus management, screen-reader narration on dynamic content) — and the procurement officers I talked to said that was the first scanner they could actually quote in a Title II attestation without their counsel rewriting it.

3. The auto-fix PR is the moat. Nine months ago I would have told you the moat was the scan engine. It isn't — the scan is solved. The moat is the part that takes a 2.4.4 Link Purpose violation, reads the surrounding JSX, generates an aria-label that fits the link's destination, and ships it as a clean diff. That requires reading code, not just running rules. It's the only piece I'd have a hard time rebuilding from scratch.

Where this goes wrong

I'm sharing this on the way up, not from a finished line. There are three things I haven't solved and they're worth flagging.

The hardest WCAG criteria are still manual. Anything involving keyboard focus, screen-reader narration, or dynamic-state announcement requires running the page in assistive technology and listening. We catch the structural violations; we cannot catch a screen reader saying "button" when it should say "submit my application." For now we ship a checklist and an aria-live linter. Eventually I want to ship a manual-audit-as-a-service tier that pairs a human reviewer with the automated scan — but I haven't built it yet.
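For a flavor of what an aria-live linter can catch automatically, here's a minimal sketch that flags invalid aria-live values (the valid ones per WAI-ARIA are "off", "polite", and "assertive"). The genuinely hard part — whether the announcement makes sense to a listener — stays manual:

```javascript
// Toy aria-live linter: flag aria-live attributes with invalid values.
// Valid values per WAI-ARIA: "off", "polite", "assertive".
const VALID_ARIA_LIVE = new Set(["off", "polite", "assertive"]);

function lintAriaLive(html) {
  const findings = [];
  const attr = /aria-live\s*=\s*"([^"]*)"/gi;
  let m;
  while ((m = attr.exec(html)) !== null) {
    if (!VALID_ARIA_LIVE.has(m[1].toLowerCase())) {
      findings.push({ value: m[1], offset: m.index });
    }
  }
  return findings;
}

console.log(lintAriaLive('<div aria-live="rude"></div>').length);   // 1
console.log(lintAriaLive('<div aria-live="polite"></div>').length); // 0
```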

Overlays still convert. I've watched at least two municipal websites buy an accessiBe-style overlay after reading our comparison page. Trust in deep-tech procurement runs lower than I expected. Educational selling is necessary, not optional, and a free tool that just works on their actual URL is the only marketing that compounds.

The DOJ deadline is a forcing function but not a moat. Once 1,400 large public entities buy something, the second wave of 38,000 smaller entities is going to compress prices toward $9/mo per site. Whoever is best at distribution at the per-state level wins. I haven't started that conversation with state procurement co-ops yet.

Try it

If you've got a website you'd like a Title II conformance read on, the free scanner is here:

accessiscan.piposlab.com/free/wcag-scanner

Drop a URL, get a report in 60 seconds, no signup. If the report is useful and you want the auto-fix PRs and the VPAT, that's where the paid tier kicks in.

I'd genuinely love feedback from anyone in this thread who's been on either side of an accessibility procurement — buyer or vendor. The biggest open question I have is how the smaller-entity wave will source this; if you've worked with state-level procurement co-ops or municipal-association group buys, I'd love to hear what worked.

Six weeks of work, $19/mo, public deadline. The rest of the moat is execution.

— Alex (Pipo Labs)

posted to Marketing on May 8, 2026
  1.

    This is one of the clearest examples I’ve seen recently of “regulation-driven software demand” being spotted early instead of reacted to late.

    Most founders chase trends.
    This founder chased enforcement mechanics.

    That’s a very different lens.

  2.

    The "auto-fix PR is the moat" point really resonated. I'm a solo dev shipping a tiny iOS memo app (a Captio replacement) and even at my size the lint-vs-context-aware-fix gap is real — Xcode's accessibility audit will flag a missing label, but generating one that actually fits the user-visible verb means reading the SwiftUI view body and the binding around it, which is a totally different problem. Your three-report-views finding (procurement vs. engineer vs. consultant) is the kind of thing you only get from dragging it through real customers, not from spec-reading. Curious: do you see the auto-fix PR generator staying web-stack-specific (JSX/Tailwind) on purpose, or would extending to native Swift/Kotlin compound the moat or dilute it? Title II RFPs cover native mobile too.

  3.

    The product direction is much stronger than the current branding frame.

    You’re selling into procurement, compliance, legal exposure, public-sector trust, and engineering remediation workflows simultaneously. That’s not a “lab” category anymore — it starts behaving more like infrastructure once cities and agencies are involved.

    The interesting part isn’t the scan itself. It’s the transition from “accessibility checker” → “continuous remediation infrastructure.”

    That’s the layer that feels durable.

    A tighter company-grade name would probably help more than expected once procurement conversations start compounding.

    Exirra.com especially feels closer to the level of trust + infrastructure positioning this is moving toward than “PiposLabs” or “AccessiScan.”

    1.

      Solid observation on the positioning frame — that's exactly the gap I've been wrestling with. Pipo Labs is the holding entity (a Wyoming LLC that owns 16 SaaS products). AccessiScan is one of them, marketed standalone. So "Pipo Labs" never appears in the procurement officer's view of the product — they buy AccessiScan, see AccessiScan in their VPAT, AccessiScan in their Stripe receipt.

      You're right that once procurement conversations compound, the parent brand starts to matter (renewals, multi-product upsells, RFP language). I'm holding off on a parent rebrand until at least 3 products are profitable — too early to know which framing actually resonates with the buyer.

      For the launch this week, AccessiScan is the standalone front door. The "lab" framing is intentionally low-stakes because the products are still proving themselves.

  4.

    Quick one for the thread, since I know the auto-fix angle is the part most readers will be skeptical about.

    I just ran AccessiScan against news.ycombinator.com (the site we're literally on right now) — picked it because it's a famous, decade-old site that everyone here recognizes. Result: 75/100, 3 critical/serious WCAG violations:

    — 2× Images without alt text (WCAG 1.1.1) — those tiny y18.svg upvote arrows and the HN logo
    — 1× Form input without label (WCAG 1.3.1 + 4.1.2) — the search box in the header
    — No <h1> heading on page (WCAG 2.4.6)

    Each one comes with the source-code line that triggered it. The auto-fix pipeline reads the surrounding 5 files of context before drafting an aria-label / alt that fits the page's existing HTML conventions — so a Tailwind site gets a Tailwind utility, a Vue file gets a directive, etc.

    Try it for free here, no signup: accessiscan.piposlab.com/free/wcag-scanner
