We've been working on PromptBrake — an automated scanner that runs security tests against LLM-powered API endpoints. Along the way, we ended up building a few standalone tools that might be useful even outside of it:
LLM Security Checklist Builder — a practical release checklist covering prompt injection, tool permissions, data exposure, and output controls
Prompt Injection Payload Generator — generates direct, indirect, and multi-turn injection payloads you can adapt for testing your own endpoint
OWASP LLM Test Case Mapper — translates OWASP LLM Top 10 risks into concrete test ideas with ownership guidance
All three are free and don't require an account: promptbrake.com/free-tools
We built these to give back to the community that's been sharing knowledge in this space. LLM security is still early, and a lot of teams aren't sure what they might be missing — figured it's better to make this kind of stuff accessible rather than gate it.
Curious how others here are approaching this — do you have a repeatable process before shipping LLM features, or is it still mostly ad hoc?
The Prompt Injection Payload Generator is a massive gift to the dev community, Specialist-Bee9801. Most teams are still in the "ad-hoc" phase of AI safety; by mapping concrete test cases to the OWASP Top 10, you’re turning vague "AI anxiety" into a repeatable technical workflow.
I’m currently running Tokyo Lore, a project that highlights high-utility security tools and the logic behind them. Since you're building the infrastructure to protect LLM-powered APIs from injection and data exposure, entering PromptBrake could be the perfect way to get your scanners in front of more engineering teams while your odds are at their absolute peak.
Appreciate the kind words — glad the tools are useful. Right now, we’re focused on building and improving things based on direct user feedback, but thanks for reaching out