The fastest way to break a modern platform is to hand unlimited power to an automated assistant and hope policy paperwork will contain it. AI is not a new perimeter. It is a new actor with change rights, secret access and the ability to chain tools. Treat it like an engineer with root until you can prove otherwise.
Ajai B. Paul, a Senior IEEE panel review judge and CISSE paper reviewer, operates from a simple frame: security is a system, not a slogan. He built and scaled a global enterprise security function with clear tenets—secure by default, proactive security, automation-first, accountability by metrics and security as a business enabler—while unifying defenses into a single Cyber Defense Engineering unit.
“Speed without guardrails is theater. Guardrails without evidence are also theater,” he says. “Zero Trust for AI means identity-scoped capability, reversible change and proof.”
Embed AI Within the Identity Boundary
Agentic systems are not widgets. They authenticate, assume roles, read secrets, open tickets and edit code. The blast radius is the identity layer. Treat agents as first-class identities with the least power required for the smallest unit of work. That means short-lived, purpose-bound tokens, scope-specific roles and explicit break-glass elevation paths that expire on their own.
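A minimal sketch of what that can look like in practice, assuming PyJWT for token issuance; the agent identity, task binding, scope names and TTL below are illustrative, not prescriptive:

```python
# Mint a short-lived, purpose-bound token for one agent and one task.
# Requires PyJWT (pip install pyjwt); claim names here are illustrative.
import time
import jwt

SIGNING_KEY = "replace-with-a-vault-managed-secret"  # never hardcode in production

def mint_agent_token(agent_id: str, task_id: str, scopes: list[str],
                     ttl_seconds: int = 300) -> str:
    """Issue a token bound to one agent, one task and the smallest scope set."""
    now = int(time.time())
    claims = {
        "sub": agent_id,           # a distinct agent identity, not a shared service account
        "task": task_id,           # purpose binding: useless outside this task
        "scopes": scopes,          # explicit allowlist, e.g. ["tickets:write"]
        "iat": now,
        "exp": now + ttl_seconds,  # expires on its own; no revocation race
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

# Break-glass elevation follows the same shape, with a shorter TTL and an
# extra claim recording who approved the elevation and why.
token = mint_agent_token("agent-deploy-bot", "TASK-4711", ["tickets:write"])
```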
This is where Ajai’s enterprise work is instructive. The program he led prioritized identity truth over log volume: consolidating defenses, clarifying responsibilities and aligning detection and response to how access is actually granted and used. On the control plane, that thinking carried into major IAM moves—Okta cutovers, SailPoint-based access reviews and merchant authentication patterns—because scalable safety starts with who may act, rather than what they intend.
“If an agent can do everything, it will eventually try,” he notes. “Constrain the role. Shrink the token. Log the why.”
Make Prompts Safe by Design, Not by Training
Most AI incidents will not be model failures. They will be orchestration failures: prompt injection through data sources, tool abuse through over-broad verbs and data exfiltration through friendly-sounding tasks. Fix this like product security, not content moderation.
Harden prompts as interfaces. Enumerate allowed tools and arguments. Block network and file I/O by default. Separate knowledge retrieval from execution so the agent cannot both decide and deploy in a single pass. Ajai’s playbook for platform hardening—defense simulation, incident playbooks and controlled cutovers—translates directly. Build purple-team scenarios for agent workflows. Test the rails with hostile inputs. Publish the fail-closed behaviours. The approach draws on the expertise that earned him the Global Recognition Award for advancing enterprise security strategy and automation-first defense models.
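A minimal sketch of the allowlist idea, treating every tool call the model proposes as untrusted input; the tool names and argument schemas are hypothetical:

```python
# Validate each proposed tool call against an explicit interface before
# anything executes. Unknown tools, unknown arguments and wrong types fail closed.
ALLOWED_TOOLS = {
    # Verb-scoped, not over-broad: no generic "shell" or "http" tool exists at all.
    "read_ticket": {"ticket_id": str},
    "open_pull_request": {"repo": str, "branch": str, "title": str},
}

def validate_tool_call(name: str, args: dict) -> dict:
    schema = ALLOWED_TOOLS.get(name)
    if schema is None:
        raise PermissionError(f"tool '{name}' is not on the allowlist")
    for key, value in args.items():
        if key not in schema:
            raise PermissionError(f"argument '{key}' not allowed for '{name}'")
        if not isinstance(value, schema[key]):
            raise PermissionError(f"argument '{key}' must be {schema[key].__name__}")
    missing = set(schema) - set(args)
    if missing:
        raise PermissionError(f"missing required arguments: {sorted(missing)}")
    return args  # only validated calls ever reach the executor

# The retriever that decides and the executor that deploys run behind separate
# identities, so a single injected instruction cannot do both in one pass.
```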
“Secure prompt design is API security,” Ajai says. “Treat it like an interface that must survive malicious input. You are not ‘teaching’ a model. You are constraining a system.”
Shift from Scanners to Structured Evidence
Traditional scanners will miss AI-introduced risk: generated code smuggling unsafe defaults, long-tail permission creep in tool configs and configuration drift in ephemeral environments. Replace best-effort scanning with structured evidence at delivery speed.
Ajai’s teams ran on a bias toward automation and measurable accountability. Apply the same discipline to AI change: every agent action emits a signed decision record with inputs, role, tools used and artefact diffs. Every change path has a revert plan encoded as code, not in a wiki, and every pipeline stage attaches supply-chain attestations and policy-as-code checks that are reviewed like tests.
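A minimal sketch of such a decision record, assuming an HMAC key held outside the agent’s reach (for example in a KMS); the field names and key handling are illustrative:

```python
# Emit a tamper-evident decision record for every agent action.
import hashlib
import hmac
import json
import time

RECORD_KEY = b"replace-with-a-kms-managed-key"  # never readable by the agent itself

def emit_decision_record(role: str, inputs: dict, tools_used: list[str],
                         artefact_diff: str) -> dict:
    record = {
        "timestamp": time.time(),
        "role": role,                    # which scoped identity acted
        "inputs": inputs,                # what the agent saw
        "tools_used": tools_used,        # which verbs it invoked
        "artefact_diff": artefact_diff,  # exactly what changed
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(RECORD_KEY, payload, hashlib.sha256).hexdigest()
    return record  # ship to append-only storage alongside the change itself

# Verification recomputes the HMAC over the record minus its signature field,
# so any edit to inputs, role or diff invalidates the record.
```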
None of this is red tape. It is a runway for safe speed. The same thinking improved NIST-measured maturity and reduced incident burden when coupled with better email security coverage, curated SIEM sources and Macie-backed data discovery. “If you cannot explain what changed, you cannot defend it,” Ajai says. “Evidence first, narratives later.”
Regulated by Default, Reversible on Demand
Fintech and healthcare set the bar. You do not get to ship convenience and bolt on duty of care. Ajai’s ACA work at BCBSIL is a governance blueprint worth reusing for AI programs: security-by-design, least privilege, controls mapped to formal frameworks and continuous validation under audit. His perspectives on governance-first design recently resonated at the CVision CIO & CISO Think Tank in Chicago in September 2025, where he detailed how regulatory discipline scales trust in AI-driven enterprises. The lesson is not to bury teams in paperwork. It is to make compliance an artefact of the system.
Encode PCI, SOC 2, HIPAA and emerging AI control objectives as tests inside CI/CD and as policies on the deployment gate. Ajai’s practice aligns security with business expansion and regulatory readiness; the same approach supported global growth while meeting financial-sector scrutiny. For agentic workflows, translate that into two rules: never grant an agent a permission a human could not justify, and never allow an irreversible action without an audited human checkpoint.
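A minimal sketch of those two rules as a policy-as-code gate that runs inside CI/CD before apply; the change-request fields and the set of irreversible actions are illustrative:

```python
# Block a deployment unless both rules hold. Raising fails the pipeline stage.
IRREVERSIBLE_ACTIONS = {"delete_dataset", "rotate_prod_keys", "drop_table"}

def deployment_gate(change_request: dict) -> None:
    # Rule 1: never grant an agent a permission a human could not justify.
    justified = set(change_request["human_justified_permissions"])
    for perm in change_request["requested_permissions"]:
        if perm not in justified:
            raise PermissionError(f"unjustified permission: {perm}")

    # Rule 2: never allow an irreversible action without an audited human checkpoint.
    for action in change_request["actions"]:
        if action in IRREVERSIBLE_ACTIONS and not change_request.get("approval_id"):
            raise PermissionError(f"irreversible action '{action}' lacks an audited approval")
```

Because the gate raises rather than warns, a missing approval fails the stage the same way a failing test would, which is how policy-as-code checks end up reviewed like tests.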
“Regulation is not the ceiling,” Ajai says. “It is the floor that keeps speed from collapsing into risk.” His incident leadership mantra—structure under pressure, rapid tiger-team spin-up, clear board-level communication—remains the model when AI changes go sideways.
The Road Ahead
AI will not excuse weak identity. It will forgive neither vague prompts nor missing evidence. It will compound both. The path forward is not to slow down but to encode safety into the places speed already lives: roles, prompts, pipelines and reversibility.
Ajai’s stance is uncompromising: “Treat agents like engineers. Give them only what they need. Prove every action. Design for the day you must undo.” This is how enterprises ship fast without betting the company on hope.