Oliver Friedrichs just built his fourth security company. This time, he's focusing on a big gap: AI security.
The market is only now starting to understand the problem that he's solving, and Pangea is already nearing 7-figure ARR.
Here's Oliver on how he's doing it. 👇
I’ve been in cybersecurity since the late '90s, starting out as an ethical hacker. Since then, I’ve founded four security companies, each tackling the next big wave — from network security to cloud security, and now AI.
We launched Pangea in 2021 to make it easier for developers to build secure cloud applications. At the time, we saw the developer population exploding — 27 million and counting — and realized there wasn’t a security platform designed for them.
Since then, we’ve built nearly two dozen developer-focused security services for applications, ranging from file scanning to authorization.
But then, AI adoption started exploding.
Companies are building hundreds of AI applications across their organizations, and this rush creates massive security risks around data leakage, prompt injection, and sensitive information disclosure. Traditional security tools aren't equipped for these new threats.
I draw a direct parallel to my experience in the anti-malware space. When viruses and malware first came out in the 80s and 90s, they were a new threat. We had to create new detection technology to detect that threat because it didn't exist before. This new threat is similar — only, words are weapons now instead of the bytes that we used to detect with antivirus products.
AI security is the new frontier, so we pivoted to focus specifically on helping teams build and secure AI applications and agents from the ground up.
Today, Pangea offers a unified AI security platform that allows both developers and security teams to secure GenAI, with security products that cover homegrown AI workloads and employee adoption of AI technology.
We expect to cross 7 figures in ARR this year.
The core innovation was recognizing that we could use generative AI to detect AI threats. We fine-tune much smaller models — around 300 million parameter models — to detect multilingual prompt injection threats across almost 100 languages.
Having spent years at companies like Symantec, Cisco, and McAfee running research teams to detect malware, where we claimed 99.9+% efficacy — we can now claim the same level of effectiveness for prompt injection detection. The challenge is that there's no real third-party testing body yet, so we built our AI guardrails with four main categories of protection:
Prompt injection - the #1 threat in the OWASP top 10 for large language model threats.
Confidential information and PII detection - we detect over 50 types of confidential information, with options to block, report, or encrypt using format-preserving encryption.
Malicious content filtering - partnering with companies like CrowdStrike to integrate threat intelligence.
Topic alignment - ensuring content aligns with organizational intent.
These protections power our newest product design for security teams, AI Detection & Response (AIDR), which deploys across a range of sensor form factors such as apps, agents, endpoints, and more, and feeds into a unified policy and detection engine to help enterprises gain visibility, threat detection, and control over the use of AI in their environment.
We built this platform to be internet-scale ready, as well as incredibly developer-friendly. Think Stripe and Twilio-like experience, where anyone can sign up, get an auth token, and start using our API immediately, embedding guardrails into their AI applications. There are several integration options:
API-first with SDKs in Go, Python, JavaScript, Java, and C#
AI gateway integrations for Kong, LiteLLM, Portkey, and others
MCP proxy to scrub and secure model-generated content
Browser extension to monitor shadow AI usage
Deployment flexibility run locally with Kubernetes/Helm or use our hosted SaaS
The platform also supports protection for AI gateways such as Kong and LiteLLM, integrates the Pangea SDK for application-level AI security, and ingests OpenTelemetry log formats to enhance observability across cloud-native environments.
We also support integration with common SIEMs like CrowdStrike, NextGen SIEM, and Splunk.Â
The goal is to fit into existing infrastructure.
One big challenge is just how fast the landscape is changing. Every model comes with different guardrails, and new ones are constantly emerging.
Claude, for example, is currently one of the hardest to bypass. Others? Not so much. You can’t assume the model itself will keep you safe.
And there are always new threats to discover. For example, one of our AI security researchers at Pangea Labs discovered an exploit recently called LegalPwn, that affects many common models and bypasses their malicious content filters by mimicking legal notices triggered.
Generally speaking, organizations are facing two major categories of AI security risk:
Workforce AI risk: Like shadow AI, where employees are using unvetted third-party tools and unknowingly feeding them confidential data.
Homegrown AI workload risk: Building proprietary software and systems on top of AI, such as customer-facing chatbots, creates new risks like prompt injection attacks.
We've taken a multi-pronged approach to growth:
Developer community engagement - Making our API incredibly easy to use and integrate.
Security team focus - Targeting CISOs and product security teams.
Educational content - We've classified 169 prompt injection methods and released resources for the community.
Open source contributions - Our Prompt Lab testing tool helps establish credibility.
We've secured strategic partnerships with CrowdStrike and backing from Google Ventures and Okta Ventures, validating our comprehensive approach to AI security.
We also launched The Great AI Escape in early 2025, a virtual escape room challenge that showcased real-world prompt injection threats in an interactive format. Over $10K in prizes and three increasingly tough rooms helped drive awareness in a fun, developer-friendly way.
The biggest hurdle has been education. Many teams are racing to build with AI, prioritizing functionality — only to realize later that security is a critical gap.
If I had to do it again, I'd go after security teams first. Once security teams see the risks and value, they bring developers into the conversation.
Build with scale in mind from day one. Even if you're starting small, architect your system to handle enterprise-scale traffic.
Focus on developer experience. Whether you're selling to developers or not, making your product easy to integrate is crucial.
Know your market’s maturity. AI security is only top of mind once things move to production.
Lead with education. Sharing what you know builds trust.
Most importantly, security can’t be an afterthought with AI. The speed, scale, and autonomy of this tech mean the risks can multiply faster than anything we’ve seen before. Guardrails need to be built in from the start.
With GenAI, we're witnessing the fastest software adoption curve in history — but also the fastest-growing security blind spot.Â
That’s why we just launched Pangea AIDR alongside our guardrails for AI applications, and our research arm, Pangea Labs, is also ramping up — driving new protections for emerging threats like image injection attacks on LLMs, and offering AI red teaming services.
GenAI is being embedded everywhere, and it’s only a matter of time before the security incidents start making headlines. We’re building the infrastructure now to prevent that.
You can visit pangea.cloud to learn more about our platform and sign up to try our APIs. We also have a ton of resources, such as our prompt injection taxonomy that we update on a regular basis, and our in-house research provided by Pangea Labs. Or connect with me on LinkedIn.
Leave a Comment
You should knoe security issues first then build tools.
Pangea’s journey—leveraging AI to secure AI—is both visionary and pragmatic. Recognizing security gaps early, building developer-first tools, and nearing seven-figure ARR sets a compelling standard in the AI security space.
That’s truly inspiring! Building four security companies across different eras of technology shows real vision and adaptability. Cybersecurity keeps evolving, and it’s leaders like you who set the path for others to follow. Our upcoming web focus is on creating secure, modern websites and digital solutions, and it’s motivating to see how innovation in security continues to shape the future of tech.
such a good use case, I love it
In this video, James Fleischmann discusses how to bridge AI security gaps while scaling operations efficiently, aiming to achieve almost a seven-figure annual revenue through strategic solutions, robust infrastructure, and market-driven growth initiatives.
Oliver Friedrichs' shift from traditional cybersecurity to AI security with Pangea is impressive. His approach to using AI to detect AI threats, like prompt injections, is innovative. Achieving near 7-figure ARR in such a niche market showcases the potential for addressing emerging security challenges. This underscores the importance of adaptability and foresight in the ever-evolving tech landscape.
Really insightful journey. I think what stands out is how you’ve combined years of security expertise with today’s AI challenges. It’s a reminder that whether in security or sales, the tools you use make the difference. For example, in my space I use LeadsGorilla,it’s AI-powered and helps me find verified leads and client insights within seconds, freeing up time to focus on strategy instead of manual work. The principle feels the same: use the right AI guardrails to work smarter, not harder.
The protection of data, systems, and workflows through AI security bridges critical gaps. This space is experiencing rapid growth, with companies approaching a 7-figure annual revenue milestone due to strong demand and innovative solutions.
Fourth rodeo and still spotting the gap...love it..
Impressive work! It’s clear Oliver identified a huge gap in AI security and leveraged deep experience to build a solution that scales and educates the market. Exciting to see such innovation in a rapidly evolving space.
This is spot on. The developer experience and easy integration via APIs are key for adoption. At Modsen, when we build AI applications for our clients, choosing the right security foundation is now one of the first discussions we have. Platforms like Pangea that understand the DevEx are crucial. Building without these guardrails is simply not an option anymore.
Oliver, this is visionary work. From ethical hacking to building Pangea, your pivot to AI security is exactly what this moment demands. I’m building White Waters Sentinel, a civic tech platform for pipeline security in Nigeria. Bootstrapped, live, and mission-driven. Your approach to using AI to detect AI threats—and building developer-first guardrails—is inspiring. I’m exploring aligned partnerships with builders tackling real-world risk.
Makes sense .... focusing on AI-specific threats and being developer-friendly really sets Pangea apart. Timing is perfect too, with AI adoption growing so fast and existing tools not fully addressing these risks
Really impressive journey! Pivoting from network and cloud security to AI security makes so much sense — guardrails for GenAI are definitely the need of the hour.
This is a great intuition to work with.
“Use AI to defend AI” is the right mental model, and it shows in how Pangea’s guardrails + AIDR cover prompt injection, PII/confidential data, malicious content, and alignment while meeting teams where they are (SDKs, gateways like Kong/LiteLLM, even an MCP proxy and browser extension).
The multilingual small-model approach and the near-7-figure ARR momentum make the bet feel timely without being hypey.
Two quick Qs:
Of your integration paths (SDK vs. gateway vs. MCP vs. extension), which one correlates with the fastest time-to-value and paid conversion?
You mention ~99.9% efficacy on prompt-injection detection but note the lack of third-party testing; how are you operationalizing evals today, and would you back an open benchmark alongside your taxonomy/“Great AI Escape” education push?
P.S. I’m with Buzz, we build conversion-focused Webflow sites and pragmatic SEO for product/security launches. Happy to share a tight 10-point GTM checklist if useful.