I use AI chats (ChatGPT / Claude / Gemini) a lot for coding and debugging.
Usually that means pasting logs, configs, stack traces, or random chunks of text.
A couple of weeks ago I almost pasted a real API key into ChatGPT by accident while sharing logs.
I caught it just before sending, but it made me realize how easy it is to leak sensitive stuff when you're moving fast with AI tools.
Most of us manually delete things like API keys or emails before pasting logs, but it's surprisingly easy to miss something.
So I built a small Chrome extension called PasteSafe.
The idea is simple:
when you paste something into an AI chat, it scans the text locally and detects things like:
• API keys
• emails
• phone numbers
• IDs / UUIDs
• URLs
• amounts
If something sensitive is detected, it can either:
• ask before inserting, or
• automatically mask the values
Example:
API key: sk-1234567890abcdef → API key: [API_KEY#1]
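For anyone curious how that kind of masking can work, here's a rough sketch of a regex-based pass. This is not PasteSafe's actual code; the patterns, labels, and function name are illustrative assumptions, and real detectors need many more patterns and better false-positive handling:

```javascript
// Hypothetical detection patterns (illustrative only, not PasteSafe's real rules).
const PATTERNS = {
  API_KEY: /\bsk-[A-Za-z0-9]{16,}\b/g, // OpenAI-style "sk-..." keys
  EMAIL: /\b[\w.+-]+@[\w-]+\.[\w.-]+\b/g,
  UUID: /\b[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\b/gi,
};

// Replace each match with a numbered placeholder like [API_KEY#1],
// counting separately per category so values stay distinguishable.
function maskSensitive(text) {
  const counters = {};
  let out = text;
  for (const [label, pattern] of Object.entries(PATTERNS)) {
    out = out.replace(pattern, () => {
      counters[label] = (counters[label] || 0) + 1;
      return `[${label}#${counters[label]}]`;
    });
  }
  return out;
}

console.log(maskSensitive("API key: sk-1234567890abcdef"));
// → "API key: [API_KEY#1]"
```

The numbered placeholders matter because they keep the pasted text readable: if the same log mentions two different emails, the AI can still tell them apart as [EMAIL#1] and [EMAIL#2].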
Everything runs locally in the browser. No servers and no data collection.
I'm curious how others deal with this.
Do you manually clean logs before pasting them into AI chats, or do you just trust yourself not to miss anything?
One thing that surprised me while building this is how often secrets appear in logs or stack traces without people noticing.
Things like API keys, internal URLs, even emails.
Has anyone else run into similar situations when using AI tools for debugging?
If anyone wants to try it, here's the Chrome Web Store page:
https://chromewebstore.google.com/detail/pastesafe-—-ai-paste-sani/gpoiombmmaegnfijmcelgbkfbkelgdih
Would love feedback from people who use AI chats a lot.