
The best feedback I got from a beta tester this week: "this didn't help."

Last week a beta tester for my WordPress performance plugin sent me his test results.

My setup wizard ran on his site. Produced no rules worth applying. Added about 1 millisecond of measurable "improvement," which is noise, and against a homepage that already took 15 seconds to load, noise is all it could ever be.

His report was short: "this didn't help."

Honestly, that's the feedback I wanted. Not "great plugin :)" but the kind that tells you exactly where your product is wrong.

I opened his Network tab. WooCommerce was loading 30+ assets on his homepage. Bookly was loading 10. On his site, neither plugin renders anything on the homepage: the "Book Now" button goes to Bookly's dedicated booking page, and the homepage has no products. But from WordPress's perspective, both plugins are active, so both plugins load everywhere.
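For anyone who hasn't hit this before: it happens because most plugins enqueue their front-end assets unconditionally. A minimal sketch of the pattern (the `myplugin-*` handles are hypothetical; the hooks and functions are real WordPress APIs):

```php
<?php
// The typical plugin pattern: assets enqueued on every front-end page
// load, whether or not the plugin renders anything there.
add_action('wp_enqueue_scripts', function () {
    wp_enqueue_style(
        'myplugin-app',                       // hypothetical handle
        plugins_url('css/app.css', __FILE__)
    );
    wp_enqueue_script(
        'myplugin-app',                       // hypothetical handle
        plugins_url('js/app.js', __FILE__),
        ['jquery'],  // dependency
        '1.0',
        true         // load in footer
    );
});
```

Multiply that by every active plugin and you get 40+ requests on a page that needed none of them.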

My wizard didn't catch it because its rule library classifies plugins at the plugin level ("WooCommerce is frontend-critical") — correct on most sites, wrong on his. The right decision was page-level: "WooCommerce is critical on /shop but not on /."

I wrote two manual per-page rules and sent them to him. He applied them to the new beta release.
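For the curious, the rules boil down to something like this mu-plugin sketch. This is the idea, not the exact rules I shipped; the WooCommerce handles are the real core ones, but the Bookly handles are placeholders you'd verify in your own Network tab:

```php
<?php
/**
 * Rough sketch of the two per-page rules as a standalone mu-plugin.
 * WooCommerce handles are real core handles; the Bookly handles are
 * placeholders, so check them against the Network tab first.
 */
add_action('wp_enqueue_scripts', function () {
    if (!is_front_page()) {
        return; // these rules apply to the homepage only
    }

    // Rule 1: WooCommerce is critical on /shop, not on /.
    foreach (['woocommerce-general', 'woocommerce-layout', 'woocommerce-smallscreen'] as $handle) {
        wp_dequeue_style($handle);
    }
    wp_dequeue_script('wc-cart-fragments');

    // Rule 2: Bookly is critical on its booking page, not on /.
    wp_dequeue_style('bookly-frontend');  // placeholder handle
    wp_dequeue_script('bookly-frontend'); // placeholder handle
}, 99); // priority 99: run after plugins enqueue at the default 10
```

The priority matters: dequeue before the plugin has enqueued and you dequeue nothing.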

Result: homepage LCP dropped from 15 seconds to 0.8 seconds. PageSpeed: 95 on Desktop. Total Blocking Time: 0 ms.

His next message: "Test results with the new version show a massive improvement."

Three things I took from it:

  1. A tester saying "this didn't help" is ten times more valuable than one saying "works great." The first tells you exactly where to build. The second tells you they're being polite.

  2. My wizard is wrong for about 15-20% of sites: the heavy-page-builder SMB segment. I didn't know that before this feedback. The next release's headline feature is now "smart per-page defaults": scan the site, measure which plugins actually render visible content on each page, and generate the rule that fits this specific site (one possible shape for that detection is sketched after this list). That feature exists because one tester was honest.

  3. Build-in-public beats build-in-silence, even when the public is just two beta testers. If I hadn't been shipping weekly and asking for real numbers, this gap would have lived in my product for another six months.
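On point 2, here's one possible shape for the per-page detection. This is a hedged sketch, not Accelerator's actual implementation: fetch each page and look for plugin-specific markers in the rendered HTML. The marker lists here are hypothetical; a real rule library would maintain them per plugin.

```php
<?php
/**
 * One possible per-page detection heuristic (a sketch, not the actual
 * Accelerator implementation): fetch a page and check the rendered
 * HTML for markers showing the plugin contributed visible content.
 */
function plugin_renders_on_page(string $url, array $markers): bool {
    $response = wp_remote_get($url); // WordPress HTTP API
    if (is_wp_error($response)) {
        return true; // fail safe: if unsure, treat the plugin as critical
    }
    $html = wp_remote_retrieve_body($response);
    foreach ($markers as $marker) {
        if (stripos($html, $marker) !== false) {
            return true;
        }
    }
    return false;
}

// Hypothetical marker list; a real rule library would own these.
$woo_on_home = plugin_renders_on_page(home_url('/'), [
    'class="woocommerce',
    'wc-block-grid',
]);
// If false, generate the rule: dequeue WooCommerce assets on /.
```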

Product for context: I'm building Accelerator — a WordPress performance plugin focused on plugin-boot overhead. Currently in closed beta. Two case studies going up this week with the full numbers.

Has anyone else had a "this didn't help" moment turn into the feature that actually unlocked the product? Curious how other indie hackers handle honest negative feedback without getting defensive.
