Zero-trust security was designed for humans. The assumptions baked into zero-trust frameworks — continuous verification, least-privilege access, never trust the network — were built around the behavior of human users accessing enterprise systems.
AI agents are not human users. They don’t authenticate once and then work. They spawn dynamically, request broad permissions, communicate with dozens of downstream services, and operate at speeds that make human audit review impractical in real time. The security frameworks built for human users were not designed for this.
At RSAC 2026, two of the largest enterprise security vendors announced they’ve built new frameworks that address this gap — and both have chosen to frame the solution the same way: zero-trust, extended to agents as first-class identity principals.
Cisco: DefenseClaw and AI Defense Explorer Edition
Cisco’s RSAC 2026 announcement has two components that work together.
DefenseClaw is an open-source framework for applying zero-trust principles to AI agent deployments. The framework treats each agent as a distinct identity principal — not as a service account or a system user, but as an entity with its own identity, its own permissions scope, and its own audit trail.
DefenseClaw’s core model:
- Agents receive minimum-privilege permissions at spawn time, scoped to the specific task they’re executing
- Every action an agent takes is logged with the agent’s identity attached — not the human operator’s identity
- Permissions are temporary and revocable; agents don’t accumulate access over time
- Agent-to-agent communication is also subject to identity verification — an agent can’t simply trust another agent in the same deployment
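The core model above — per-instance identity, spawn-time scoping, temporary grants, and agent-attributed audit entries — can be sketched in a few dozen lines. This is an illustrative sketch, not DefenseClaw's actual API; every name here (`AgentIdentity`, `AgentRegistry`, `Grant`, and so on) is a hypothetical stand-in.

```python
import uuid
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AgentIdentity:
    """A distinct identity principal for one agent instance (not a service account)."""
    agent_id: str
    task: str
    spawned_by: str  # the human operator, recorded separately from the agent's own identity

@dataclass
class Grant:
    """A temporary, task-scoped permission that expires and can be revoked."""
    resource: str
    action: str
    expires_at: datetime
    revoked: bool = False

    def is_valid(self, now: datetime) -> bool:
        return not self.revoked and now < self.expires_at

class AgentRegistry:
    def __init__(self):
        self._grants: dict[str, list[Grant]] = {}
        self.audit_log: list[dict] = []

    def spawn(self, task: str, operator: str, scopes: list[tuple[str, str]],
              ttl: timedelta = timedelta(minutes=15)) -> AgentIdentity:
        """Mint a fresh identity and grant only the scopes the current task needs."""
        ident = AgentIdentity(agent_id=str(uuid.uuid4()), task=task, spawned_by=operator)
        expiry = datetime.now(timezone.utc) + ttl
        self._grants[ident.agent_id] = [Grant(r, a, expiry) for r, a in scopes]
        return ident

    def authorize(self, ident: AgentIdentity, resource: str, action: str) -> bool:
        """Check the grant, and log the attempt under the agent's own identity."""
        now = datetime.now(timezone.utc)
        allowed = any(g.resource == resource and g.action == action and g.is_valid(now)
                      for g in self._grants.get(ident.agent_id, []))
        self.audit_log.append({"principal": ident.agent_id, "resource": resource,
                               "action": action, "allowed": allowed, "at": now.isoformat()})
        return allowed
```

Because grants carry their own expiry and the registry mediates every call, an agent can't accumulate access over time: a new task means a new `spawn`, a new identity, and a new minimal scope.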
The open-source release is strategically significant. Cisco isn’t trying to own the agent identity standard — it’s trying to establish one, and open-source is the fastest path to adoption.
AI Defense: Explorer Edition extends Cisco’s existing AI Defense product into agent runtime monitoring. Where traditional security tools look for known threat signatures, AI Defense Explorer monitors agent behavior for anomalies: unusual access patterns, unexpected external calls, permission escalation attempts, and output patterns that suggest the agent has been manipulated (prompt injection being the attack surface most specific to AI).
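Behavioral monitoring of this kind boils down to comparing observed agent activity against an expected profile rather than a signature database. The sketch below is an assumption-laden illustration of that idea, not Cisco's implementation; the profile fields and thresholds are invented for the example.

```python
from collections import Counter

class AgentBehaviorMonitor:
    """Flags deviations from an agent's expected behavior profile.

    Illustrative only: a real monitor would learn baselines rather than
    take a hardcoded host set and call budget.
    """

    def __init__(self, expected_hosts: set[str], max_calls_per_window: int = 50):
        self.expected_hosts = expected_hosts
        self.max_calls = max_calls_per_window
        self.calls: Counter = Counter()
        self.alerts: list[str] = []

    def observe_call(self, agent_id: str, host: str) -> None:
        """Anomaly checks: unexpected external destinations, unusual call volume."""
        self.calls[agent_id] += 1
        if host not in self.expected_hosts:
            self.alerts.append(f"{agent_id}: unexpected external call to {host}")
        if self.calls[agent_id] > self.max_calls:
            self.alerts.append(f"{agent_id}: call volume {self.calls[agent_id]} exceeds baseline")

    def observe_permission_request(self, agent_id: str, scope: str,
                                   granted_scopes: set[str]) -> None:
        """An agent asking for a scope it was never granted is an escalation signal."""
        if scope not in granted_scopes:
            self.alerts.append(f"{agent_id}: escalation attempt for scope '{scope}'")
```

Note what this catches that signature matching cannot: a prompt-injected agent making syntactically valid, individually authorized calls still changes its destination mix, call volume, or scope requests, and those shifts are what the profile comparison surfaces.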
Microsoft: Zero Trust for AI Across the Model Lifecycle
Microsoft’s RSAC announcement extends its existing Zero Trust architecture to cover the full AI model lifecycle — from training and fine-tuning through deployment and runtime operation.
The Microsoft framework treats agents as first-class identity principals in the Microsoft Entra ID (formerly Azure Active Directory) ecosystem, which means:
- Agents can be assigned identities in the same directory as human users and service principals
- Conditional access policies — the rules that determine whether a user or service can access a resource — can be applied to agent identities
- Agent actions flow through the same audit logging infrastructure as human actions, making compliance reporting consistent
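The practical payoff of agents living in the same directory is that one policy engine evaluates every principal type. The sketch below shows that shape in the abstract — it is not the Entra conditional access API, and the policy fields (`applies_to`, `max_risk`) are hypothetical simplifications of real conditions like device state, location, and sign-in risk.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    principal_type: str   # "user", "service", or "agent" -- agents are first-class
    principal_id: str
    resource: str
    risk_score: float     # 0.0 (clean) .. 1.0 (likely compromised), from runtime signals

def evaluate(request: AccessRequest, policies: list[dict]) -> str:
    """Apply every policy matching this principal type and resource; deny wins."""
    for policy in policies:
        if (request.principal_type in policy["applies_to"]
                and request.resource == policy["resource"]
                and request.risk_score > policy["max_risk"]):
            return "deny"
    return "allow"

# One policy object governs humans and agents alike -- no parallel rule system:
crm_policy = {"applies_to": {"user", "agent"}, "resource": "crm", "max_risk": 0.5}
```

Because the same `evaluate` path (and, downstream, the same audit pipeline) handles agents and humans, compliance reporting stays consistent: there is no second, agent-specific access model to reconcile.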
The “full model lifecycle” scope is broader than Cisco’s agent runtime focus. Microsoft is applying zero-trust not just to deployed agents but to the training infrastructure, the model weights, and the fine-tuning pipelines — treating every stage as a potential attack surface.
This reflects Microsoft’s position as both an AI infrastructure vendor (Azure) and an AI application vendor (Copilot). The security framework has to cover both what Microsoft deploys and what customers build on top of Azure.
The Convergence That Matters
Both Cisco and Microsoft landing on the same conceptual framework — agents as first-class identity principals, least-privilege access, runtime monitoring — at the same conference isn’t coincidental. It reflects a genuine consensus forming around what agent security requires.
The shared architecture has three pillars:
Identity. Agents need to be identifiable. Not just “this is a Copilot instance” but “this is the specific agent instance spawned at 14:32 by user X to perform task Y, with permissions Z.” Without granular identity, audit trails are meaningless.
Least-privilege. Agents should receive exactly the permissions they need for the current task and nothing more. The current norm — agents running with service-account-level access that covers a broad permission scope — is the wrong default.
Runtime monitoring. Static permission grants aren’t sufficient because agents behave dynamically. Runtime monitoring catches permission misuse, unexpected behavior patterns, and compromise indicators that pre-deployment configuration can’t anticipate.
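The granularity the identity pillar demands — which instance, spawned when, by whom, for what task, with which permissions — maps naturally onto a structured audit record. A minimal sketch, with every field name and value invented for illustration:

```python
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AgentAuditRecord:
    """One audit entry granular enough to answer who, what, why, and with what."""
    agent_instance_id: str      # this specific spawn, not the agent product
    spawned_at: str             # ISO 8601 timestamp of instance creation
    spawned_by: str             # the human principal who triggered the spawn
    task: str                   # the task the instance was scoped to
    permissions: tuple[str, ...]  # the exact grants in effect
    action: str                 # what the agent actually did

record = AgentAuditRecord(
    agent_instance_id="agent-7f3a",
    spawned_at="2026-04-28T14:32:00Z",
    spawned_by="user:x",
    task="reconcile-invoices",
    permissions=("invoices:read", "ledger:read"),
    action="read invoices/2026-Q1",
)
audit_line = json.dumps(asdict(record))  # ship to the same log pipeline as human actions
```

An entry that records only "a Copilot instance touched the invoices database" supports none of the three pillars; an entry shaped like the one above supports all of them.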
What This Means for Teams Building Agents
The RSAC 2026 announcements from Cisco and Microsoft signal that the security community has moved from “AI agents are an emerging threat surface” to “here are specific architectural countermeasures.”
For teams deploying agents to enterprise customers, this creates both opportunity and pressure. The opportunity: adopting the Cisco/Microsoft frameworks now positions your agent architecture as security-compliant before enterprise buyers make it a procurement requirement. The pressure: enterprise buyers at RSAC this week are hearing that agent security frameworks exist and that their vendors should have them.
Teams building agents on Azure have a relatively clear path — the Microsoft framework integrates with existing Azure identity infrastructure. Teams building on other platforms will find DefenseClaw’s open-source framework a more portable starting point.
The window between “security framework announced” and “security framework required by enterprise procurement” has historically been shorter than most product teams expect.
Cisco’s DefenseClaw and AI Defense: Explorer Edition were announced via the Cisco Newsroom. Microsoft’s Zero Trust for AI announcement was published on the Microsoft Security Blog. Coverage was confirmed by SiliconAngle and BizTech Magazine.