AI Agents Expose Critical Gaps in Cybersecurity

Photo by: Freepik
By Diego Valverde | Journalist & Industry Analyst - Thu, 10/09/2025 - 09:10

Investment in AI has become the top cybersecurity priority for organizations globally. However, the rapid evolution of these technologies into autonomous agents has exposed fundamental security gaps in identity and access systems designed for humans, creating a new and complex risk landscape.

The root of the problem lies in a paradigm shift: AI is transitioning from a tool that responds to one that acts. This autonomy creates a critical vulnerability because systems cannot distinguish whether an action was executed by the user or by a software agent acting on their behalf.

“AI agents are outpacing our security systems. Without industry collaboration on common standards, we risk a fragmented future where agents cannot work securely across different platforms and companies,” writes Tobin South, Co-Chair of the AIIM Community Group at the OpenID Foundation.

A global survey by PwC of 3,887 business and technology executives reveals that 60% of organizations plan to increase their AI-focused cybersecurity investments over the next 12 months, primarily for threat hunting and anomalous behavior detection. This investment push responds not only to the need to counter increasingly sophisticated attacks but also to the urgency of managing the risks inherent in AI adoption itself.

The main challenge arises from the inability of traditional Identity and Access Management (IAM) infrastructure to govern these non-human actors. When an AI agent books a flight or manages a business workflow using a user's credentials, it creates an accountability gap that makes forensic investigations difficult or impossible in the event of an incident. In the “Identity Management for Agentic AI” white paper, the AI Identity Management Community Group (AIIMCG) from the OpenID Foundation identifies urgent vulnerabilities that threaten the agent ecosystem.

Systems face significant challenges in ensuring accountability and secure delegation when agents act on behalf of users. Actions performed by agents are often indistinguishable from those of the user, making auditing unreliable. To address this, the concept of “delegated authority” through On-Behalf-Of (OBO) flows has been proposed, granting agents distinct credentials that clearly indicate when they act for a user, creating an auditable trail. However, as advanced agents delegate tasks to sub-agents, complex authorization chains emerge, risking violations of the principle of least privilege. Implementing “scope attenuation” mechanisms that progressively reduce permissions at each delegation step can help mitigate this risk.
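
As a rough illustration, the sketch below shows how a delegated-authority credential might record both the human principal and the acting agent, and how scopes can be attenuated at each delegation hop. The field names and the travel-booking scenario are assumptions made for the example, not structures taken from the white paper.

    # Hypothetical sketch of delegated authority (On-Behalf-Of) with scope
    # attenuation. Field names are illustrative, not from the AIIMCG white paper.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DelegatedToken:
        subject: str                 # the human user the work is ultimately done for
        actor: str                   # the agent (or sub-agent) presenting the token
        scopes: frozenset[str]       # permissions granted at this step
        chain: tuple[str, ...] = ()  # audit trail of every delegation hop

    def delegate(token: DelegatedToken, sub_agent: str,
                 requested: set[str]) -> DelegatedToken:
        """Hand a task to a sub-agent without ever widening the permission set."""
        attenuated = token.scopes & frozenset(requested)   # scope attenuation
        return DelegatedToken(subject=token.subject, actor=sub_agent,
                              scopes=attenuated, chain=token.chain + (token.actor,))

    # A travel agent acting for Alice hands payment off to a sub-agent.
    root = DelegatedToken("alice", "travel-agent",
                          frozenset({"flights:search", "flights:book", "payments:charge"}))
    payment = delegate(root, "payment-agent", {"payments:charge", "payments:refund"})
    print(payment.actor, sorted(payment.scopes), payment.chain)
    # payment-agent ['payments:charge'] ('travel-agent',)

Because each hop appends the previous actor to the chain, an auditor can reconstruct who acted for whom, and no sub-agent ever holds more permissions than its parent granted.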

Authorization frameworks also struggle when agents operate across organizational boundaries or shared spaces. Protocols like OAuth 2.1 work well internally but fail when agents must interact with external platforms, leading developers to build custom, error-prone integrations. In collaborative environments, current standards do not account for multiple users with variable access rights, leaving gaps in shared-space security. Additionally, the growing number of permission requests can overwhelm users, causing “consent fatigue.” Intent-Based Authorization models, which allow users to approve high-level objectives rather than individual actions, can reduce unnecessary interruptions while maintaining oversight for high-risk operations.
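
To picture Intent-Based Authorization, consider a minimal sketch in which the user approves one high-level objective, routine actions are checked against it silently, and only operations marked as high risk pause for explicit consent. The intent record, action names, and risk labels below are assumptions for the example, not part of OAuth or any published standard.

    # Illustrative sketch of intent-based authorization; the intent record and
    # risk labels are assumptions for the example, not a standardized format.
    APPROVED_INTENT = {
        "objective": "book a round-trip flight under $800",
        "allowed_actions": {"search_flights", "hold_seat", "book_flight"},
        "high_risk_actions": {"book_flight"},   # still require one-time confirmation
    }

    def authorize(action: str, ask_user) -> bool:
        """Approve routine actions silently; escalate only the high-risk ones."""
        if action not in APPROVED_INTENT["allowed_actions"]:
            return False                          # outside the approved objective
        if action in APPROVED_INTENT["high_risk_actions"]:
            return ask_user(f"Agent wants to '{action}'. Allow?")
        return True                               # covered by the standing intent

    print(authorize("search_flights", ask_user=lambda q: True))   # True, no prompt
    print(authorize("transfer_funds", ask_user=lambda q: True))   # False, outside intent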

Emerging agent behaviors further complicate identity and security management. Some agents bypass APIs entirely by controlling browsers directly, necessitating new solutions like Web Bot Auth and Workload Identity frameworks to authenticate agents safely. The proliferation of proprietary, incompatible identity systems creates fragmentation, adding complexity and vulnerabilities. 
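
The general idea behind agent-facing schemes such as Web Bot Auth is that the agent signs its requests with a key the destination site can verify, instead of masquerading as a human browser session. The sketch below shows that signing step in spirit only, using an Ed25519 key; the header names and signed message are placeholders, not the actual Web Bot Auth wire format.

    # Loose sketch of an agent signing its outgoing requests so a site can verify
    # who is calling. Header names and the signed message are placeholders, not
    # the real Web Bot Auth format.
    import base64
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    agent_key = Ed25519PrivateKey.generate()      # the agent's identity key

    def signed_headers(method: str, path: str, agent_id: str) -> dict[str, str]:
        """Attach a verifiable signature over the request target."""
        signature = agent_key.sign(f"{method} {path}".encode())
        return {
            "X-Agent-Id": agent_id,               # placeholder header name
            "X-Agent-Signature": base64.b64encode(signature).decode(),
        }

    print(signed_headers("GET", "/flights?dest=CDG", "travel-agent.example"))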

Moreover, advanced agents now operate in dual modes: either independently with their own credentials or on behalf of a user with delegated authority. Existing systems struggle to manage this duality, to apply the correct permissions in each mode, and to maintain security across both. Collectively, these challenges underscore the urgent need for standardized, flexible mechanisms for agent identity, delegation, and authorization.
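
To make the dual-mode problem concrete, a minimal sketch (with assumed field names) shows why the same agent needs different effective permissions depending on which credentials it presents: in autonomous mode it falls back to its own narrow rights, while in delegated mode it is bounded by both the user's rights and the scopes the user actually granted.

    # Illustrative sketch: resolving an agent's effective permissions differently
    # in autonomous mode versus delegated (on-behalf-of) mode. Field names are
    # assumptions for the example.
    AGENT_OWN_PERMISSIONS = {"reports:read", "alerts:write"}          # autonomous mode
    USER_PERMISSIONS = {"alice": {"reports:read", "payments:charge", "files:delete"}}

    def effective_permissions(token: dict) -> set[str]:
        if token.get("on_behalf_of") is None:
            return set(AGENT_OWN_PERMISSIONS)     # the agent's own, narrower identity
        # Delegated mode: bounded by BOTH the user's rights and the delegated scopes.
        user_rights = USER_PERMISSIONS[token["on_behalf_of"]]
        return user_rights & set(token["delegated_scopes"])

    print(effective_permissions({"agent": "ops-bot", "on_behalf_of": None}))
    print(effective_permissions({"agent": "ops-bot", "on_behalf_of": "alice",
                                 "delegated_scopes": ["payments:charge", "admin:all"]}))
    # -> {'payments:charge'}  (never more than Alice herself holds)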

The report urges developers to align with emerging enterprise profiles. It also calls for standards bodies to accelerate interoperable protocol development, and for companies to treat agents as "first-class citizens" in their security infrastructure.

AI agents will inevitably become a primary target for cyberattacks, says Matt Gorham, Leader, Cyber and Risk Innovation Institute, PwC. If compromised, an agent that has not been restricted to the least privilege possible could expose sensitive data. The industry is in an AI arms race, and building a robust identity infrastructure for agents is not just a technical upgrade but a fundamental requirement to secure the future of business automation.
