Solo founder, been shipping Claude tooling for 3 months. Got tired of
Reddit posts claiming "this one prompt changes everything" without any
data.
Ran controlled A/B tests on 120 shared codes. 47% showed no measurable improvement over baseline (placebo).
Free library: clskillshub.com/prompts
Full data + cheat sheet ($10-25): clskillshub.com/cheat-sheet
Happy to share raw test data on any specific code in the comments.
Built this because I think prompt "secrets" need the same rigor we
apply to any other tool. Open to feedback.
This is solid — especially calling out placebo.
But right now it still reads like:
→ “better prompts”
Which puts you in the same bucket you’re arguing against.
The real angle here is:
→ “prompt performance, measured — not guessed”
That’s a different category.
If people don’t instantly see:
“this helps me avoid wasting time on fake wins”
they’ll treat it like another prompt library.
Also — if you push this into a real product, the name/brand will need to carry that credibility.
Happy to share a few directions if you go that route.
The 47% placebo rate is a useful number to lead with because it reframes the conversation from 'which prompts work' to 'most shared prompts are useless, and here is how I proved it.' The controlled A/B methodology is the actual differentiator here, not the cheat sheet itself. Curious what the distribution looked like across the 120 tests. Did the 53% that showed a real effect cluster around specific prompt categories, or were they spread across types?
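For anyone wondering what a "placebo" call could look like under the hood, here's a rough sketch. This is my assumption, not necessarily the OP's actual setup: run the same pass/fail task suite with and without the shared prompt, then check whether the pass-rate lift clears a two-proportion z-test. The function name is_placebo and the alpha=0.05 cutoff are illustrative.

from math import sqrt
from statistics import NormalDist

def is_placebo(baseline_passes, baseline_n, prompt_passes, prompt_n, alpha=0.05):
    """True if the prompt shows no statistically significant lift over baseline."""
    p_base = baseline_passes / baseline_n
    p_prompt = prompt_passes / prompt_n
    # Pooled pass rate under the null hypothesis of "no difference".
    pooled = (baseline_passes + prompt_passes) / (baseline_n + prompt_n)
    se = sqrt(pooled * (1 - pooled) * (1 / baseline_n + 1 / prompt_n))
    if se == 0:
        return True  # identical outcomes on both arms, no measurable effect
    z = (p_prompt - p_base) / se
    # One-sided test: does the prompt actually outperform the baseline?
    p_value = 1 - NormalDist().cdf(z)
    return p_value >= alpha

# Example: 52/100 tasks pass with the prompt vs. 48/100 without.
# That lift is indistinguishable from noise, so it gets flagged as placebo.
print(is_placebo(48, 100, 52, 100))  # True

One caveat if the setup is anything like this: with only a few dozen runs per arm, small real lifts will also get flagged, so the 47% reads more like "no large effect" than "zero effect."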