We've been working on PromptBrake — an automated scanner that runs security tests against LLM-powered API endpoints. Along the way, we ended up building a few standalone tools that might be useful even outside of it:
LLM Security Checklist Builder — a practical release checklist covering prompt injection, tool permissions, data exposure, and output controls
Prompt Injection Payload Generator — generates direct, indirect, and multi-turn injection payloads you can adapt for testing your own endpoint
OWASP LLM Test Case Mapper — translates OWASP LLM Top 10 risks into concrete test ideas with ownership guidance
All three are free and don't require an account: promptbrake.com/free-tools
We built these to give back to the community that's been sharing knowledge in this space. LLM security is still early, and a lot of teams aren't sure what they might be missing — figured it's better to make this kind of stuff accessible rather than gate it.
Curious how others here are approaching this — do you have a repeatable process before shipping LLM features, or is it still mostly ad hoc?
Providing a "Prompt Injection Payload Generator" is a massive service to the dev community, as most teams are still in the "ad-hoc" phase of LLM security. By mapping concrete test cases to OWASP Top 10 risks, you're turning vague AI anxiety into a repeatable technical workflow.
I’m currently running a project in Tokyo (Tokyo Lore) that highlights high-utility security tools and the logic behind them. Since you're building the infrastructure to protect LLM-powered APIs from injection and data exposure, entering your project could be the perfect way to get your scanners in front of more engineering teams while your odds are at their absolute peak.
Smart move making the education layer free.
In emerging markets, trust is often built faster through useful tools than through direct product pitches.