Hey Indie Hackers,
I’m an infosec guy who spotted a scary problem: employees pasting sensitive data (like PII, passwords, or API keys) into ChatGPT and other GenAI tools, with no DLP (data loss prevention) to catch it. As AI usage explodes, this felt like a compliance nightmare waiting to happen.
So, I built PromptShield, a browser extension that blocks sensitive data before it reaches AI platforms. The backend is a Python Flask API that uses regex and DLP APIs to scan for 150+ data types (credit cards, SSNs, etc.). The extension hooks into the browser's DOM, checks each input against the API, and then blocks it, shows a warning, or lets it through, based on customizable settings. It's lightweight and runs locally to keep data private.
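For anyone curious about the scanning side, here's a minimal Python sketch of the idea: regex patterns per data type, plus a per-type block/warn/allow policy. The pattern names, policy scheme, and `scan` function are illustrative assumptions, not PromptShield's actual code (the real thing covers 150+ types and also calls out to DLP APIs).

```python
import re

# Illustrative patterns only -- a real scanner needs far more robust
# detection (e.g. Luhn checks for card numbers, entropy checks for keys).
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk_[A-Za-z0-9]{20,}\b"),
}

# Customizable per-type policy: "block", "warn", or "allow".
POLICY = {"credit_card": "block", "ssn": "block", "api_key": "warn"}

def scan(text: str) -> dict:
    """Return matched data types and the strictest action among them."""
    hits = [name for name, rx in PATTERNS.items() if rx.search(text)]
    severity = {"allow": 0, "warn": 1, "block": 2}
    action = max((POLICY.get(h, "allow") for h in hits),
                 key=severity.get, default="allow")
    return {"matches": hits, "action": action}
```

The extension would POST the draft prompt to an endpoint wrapping `scan()` and act on the returned `action` before the text ever leaves the page.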
After months of tweaking and pitching, I landed my first enterprise customer yesterday—a huge win! But it was a slog, and I’m figuring out what’s next to scale.
A Few Questions for You:
Has anyone dealt with GenAI data leaks in their work? Is this a growing pain point?
How did you go from 1 to 10 customers? My first took forever—any tips to speed up traction?
Pricing for enterprise SaaS is tricky. What’s worked for you in security or B2B?
I’d love to hear your stories, feedback, or brutal roasts. Check out PromptShield at promptshield.cloud if you’re curious—early days, so all input helps! Happy to share more about the build or customer grind if anyone wants to dig in.