Coalition for Secure AI Unveils New Agentic Identity and Security Research Following High-Profile Sessions at RSAC 2026

Building on the momentum of RSAC, CoSAI’s latest work delivers critical frameworks to help organizations navigate the evolving AI security landscape

BOSTON, MA, UNITED STATES, May 6, 2026 /EINPresswire.com/ -- Following a high-profile presence at RSAC Conference 2026, the Coalition for Secure AI (CoSAI), a global, multi-stakeholder initiative advancing the security of AI systems, is further expanding its industry guidance with the release of two new research papers. Together, these efforts reflect CoSAI’s broader push to advance practical, real-world approaches to securing AI systems through both technical guidance and industry engagement.

The publications, Agentic Identity and Access Management and The Future of Agentic Security: From Chatbots to Autonomous Swarms, examine two defining challenges of this era: adapting identity and access control to increasingly autonomous, machine-driven environments, and keeping pace with AI agents that act, decide, and spawn further agents at machine speed. They address a question that dominated discussion at RSAC 2026: as autonomous agents become an operating layer of the enterprise, how can organizations extend security principles designed for humans into systems increasingly run by machines?

On the RSAC Stage: From MCP Threats to Enterprise Defense

RSAC Conference 2026 marked a clear inflection point for agentic AI security, with CoSAI playing a visible role in the conversation. Two standing-room-only sessions brought together practitioners, architects, and CISOs grappling with a shared reality: the enterprise perimeter has moved from the network's edge to the AI agent's actions. Traditional security is no longer enough when autonomous agents are empowered to act, spend, and share data on a company's behalf.

In the session “OASIS CoSAI: Addressing What’s Next in Securing Enterprise AI,” CoSAI Technical Steering Committee co-chairs Akila Srinivasan, Anthropic, and J.R. Rao, IBM Fellow and CTO, Security Research at IBM, outlined a comprehensive roadmap for securing the enterprise AI lifecycle. The session highlighted how threats such as backdoored coding assistants and malicious model artifacts are eroding traditional security boundaries, requiring a fundamental rethink of identity, privilege, and data controls.

In response, CoSAI presented a layered, vendor-neutral defense strategy spanning supply chain security, secure agent design, and emerging standards such as Open Model Signing and secure agent gateways, equipping organizations to move from reactive defense to proactive resilience.

“Forty-plus organizations, including direct competitors, are collaborating inside CoSAI because we understand that the threat landscape doesn’t respect company boundaries. Neither can our defenses,” said J.R. Rao.

For more detail on the session, read the recap blog post on CoSAI's website.

Another CoSAI session, “Securing MCP: Mitigating New Threats in Agentic AI Deployments,” with CoSAI’s Workstream 4 co-lead Sarah Novotny, Klever.co, and Jason Clinton, Deputy CISO, Anthropic, focused on the emerging risks within the Model Context Protocol (MCP). They introduced a clear threat taxonomy, from identity misuse and context tampering to supply chain compromise, and paired it with practical, zero-trust authentication approaches that organizations can implement today. Their central message: as AI agents become context-aware intermediaries, the protocol layer itself becomes a critical and exposed attack surface. Read more in this detailed recap blog post (https://www.coalitionforsecureai.org/after-rsac-2026-the-mcp-security-question-everyone-kept-asking/).

CoSAI’s RSAC sessions crystallized a theme that carries through both new papers: identity is no longer a solved problem. As agents operate with increasing autonomy, traditional models of identity, access, and control must evolve. CoSAI’s latest research translates these insights into actionable frameworks, beginning with a deep dive into agentic identity and access management.

Securing the AI Actor

The Agentic Identity and Access Management guidance tackles a foundational challenge: without a trustworthy, machine-readable identity for every agent, no other security control can be reliably enforced. Developed by CoSAI’s Secure Design Patterns for Agentic Systems Workstream, this framework provides a practical roadmap for assigning, verifying, and governing the identities of autonomous AI agents across the enterprise.

The guidance outlines how to assign unique credentials to agents, limit their access to only what’s needed for a specific task, and maintain clear visibility into who—or what—is taking action across systems and how that access was delegated. It reinforces a central takeaway from RSAC: organizations can extend their existing identity and access management foundations to support autonomous AI safely, without starting from scratch.
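The pattern described above — a unique credential per agent, scoped to a single task, with the delegation chain recorded for audit — can be sketched in a few lines. This is a minimal illustrative sketch only; the class and field names are assumptions for this example and do not come from the CoSAI paper.

```python
from dataclasses import dataclass

# Hypothetical sketch: every agent carries its own credential, granted
# the minimum set of actions it needs, with the chain of delegating
# principals recorded so audits can answer "who or what acted, and
# who granted that access".

@dataclass(frozen=True)
class AgentCredential:
    agent_id: str            # unique, machine-readable identity for this agent
    scopes: frozenset        # least-privilege grants, e.g. {"read:sales_db"}
    delegated_by: tuple = () # principals (human or agent) who delegated access

def is_authorized(cred: AgentCredential, action: str) -> bool:
    """Allow an action only if it appears explicitly in the credential's scopes."""
    return action in cred.scopes

# A human principal delegates a narrowly scoped credential to a reporting agent:
cred = AgentCredential(
    agent_id="report-agent-01",
    scopes=frozenset({"read:sales_db"}),
    delegated_by=("alice@example.com",),
)
```

In this sketch, an unscoped action such as `write:sales_db` is simply denied, and the `delegated_by` chain makes the grant traceable — the same default-deny, audit-friendly posture the guidance describes for extending existing IAM foundations to agents.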

“Organizations are rapidly deploying AI agents, and identity and access control models need to keep pace. At the same time, valid identity alone is insufficient—credentials can be correct while outcomes are still harmful,” said Ian Molloy, Workstream co-lead, IBM. “This Agentic Identity paper defines how to prove an agent’s identity, continuously verify what it should be allowed to do, and how to safely delegate permissions, while enabling organizations to extend the identity and access management solutions they already trust.”

The new research builds on CoSAI’s earlier MCP Security taxonomy and the Principles for Secure-by-Design Agentic Systems published in 2025. Together they form a layered blueprint: principles define intent, MCP Security addresses the protocol layer, and Agentic Identity and Access Management governs the trust layer that everything else depends on.

Securing the Age of Autonomous Swarms

Agentic security was a breakout theme at RSAC this year, and for good reason. As organizations move beyond AI assistants toward fully autonomous, multi-agent systems capable of independent action across enterprise infrastructure, traditional security controls are struggling to keep pace. The Future of Agentic Security: From Chatbots to Autonomous Swarms examines how AI agents capable of coding and coordinating across sensitive systems shift the attack surface to the semantic layer, where traditional controls like static access lists and pattern-based monitoring are no longer effective.

The research identifies two unsolved problems: intent-based authorization — the inability to reliably evaluate and govern what an AI agent is actually trying to accomplish in natural language — and the semantic mosaic effect, where agents can synthesize and expose sensitive insights from innocuous sources without ever triggering conventional leak protection. These are not gaps that incremental improvements to existing security tooling will close.

To help organizations get ahead of these risks, CoSAI outlines a framework for secure agentic architecture, including ephemeral environments, dynamic credentialing, and a new category of defense: Agent Detection and Response (ADR). The core message for executives is clear: the window to build the right security infrastructure before widespread agentic deployment is narrowing.

Mary Beth Minto
OASIS Open
email us here

