
Agentic AI: A Path Toward Cybersecurity’s Light or Dark Side

By Erik Moreno - Minsait
Director of Cybersecurity


Wed, 06/04/2025 - 07:30


The 2025 edition of the RSA Conference, one of the world’s most prominent forums on cybersecurity, marked a turning point by placing at the center of discussion a concept poised to become the new axis of technological development in the coming years: Agentic AI. This term represents a disruptive evolution in the use of artificial intelligence and has captured the attention of researchers, startups, industry leaders, and security experts alike.

Unlike traditional artificial intelligence models such as LLMs (Large Language Models), Agentic AI focuses on creating autonomous agents capable of executing complex tasks on their own, with minimal human intervention and based on simple instructions. During the conference, it became clear that the immediate future foresees a proliferation of these specialized intelligent agents, capable of acting with near independence.

What exactly is Agentic AI? It is a functional layer built on top of AI models that allows for the generation of code, execution of activities, response generation, task coordination, and the combination of various technological components, achieved through simple prompts or even natural language. It is a form of artificial intelligence that acts, decides, and collaborates actively, which brings profound implications for the field of cybersecurity.
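To make the idea concrete, the loop underlying such an agent can be sketched in a few lines. This is a minimal, illustrative example only, with stubbed components: `plan`, `TOOLS`, and `run_agent` are hypothetical names, and in a real system the planning step would be handled by a language model and the tools would wrap scanners, ticketing systems, or code generators.

```python
from typing import Callable

# Tool registry: each entry maps a tool name to a callable. These stubs
# stand in for real capabilities (vulnerability scanners, report writers).
TOOLS: dict[str, Callable[[str], str]] = {
    "scan": lambda target: f"scan report for {target}",
    "summarize": lambda text: f"summary: {text[:40]}",
}

def plan(goal: str) -> list[tuple[str, str]]:
    """Stand-in for the model's planning step: turn a plain-language
    goal into an ordered list of (tool, argument) calls."""
    return [("scan", goal), ("summarize", f"findings on {goal}")]

def run_agent(goal: str) -> list[str]:
    """Execute the plan step by step without further human input,
    collecting each tool's output."""
    results = []
    for tool_name, arg in plan(goal):
        results.append(TOOLS[tool_name](arg))
    return results

print(run_agent("internal web server"))
```

The key property the sketch illustrates is the one the article describes: the human supplies only the goal, while the agent decides which actions to take and carries them out on its own.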

 

A Turning Point

In this context, Agentic AI represents a watershed moment. On the one hand, it promises to revolutionize productivity, automation, and threat response capabilities. On the other hand, it introduces unprecedented security risks, placing increasingly sophisticated tools in the hands of both defenders and attackers.

By 2027, Agentic AI is expected to cut in half the time it takes to exploit a vulnerability. This acceleration in offensive capabilities puts organizations on the defensive, forcing them to rethink their defense strategies, risk analysis models, and technology investment priorities.

Yet the defensive power of Agentic AI is equally undeniable. Few tools have proven as effective at generating secure code, automating threat detection, or orchestrating real-time incident response. These agents can act as digital co-defenders, working alongside human analysts to mitigate incidents before traditional tools can even detect them.

However, these advances do not come without significant risks. Code generated automatically by such agents may contain subtle errors that evade human review, potentially introducing critical vulnerabilities capable of compromising an organization's entire infrastructure.
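As a hypothetical illustration of the kind of subtle flaw that can slip past review of machine-generated code: a token check written with `==` looks correct and passes every functional test, yet it returns as soon as two strings differ, leaking timing information an attacker can measure. The function names below are invented for the example.

```python
import hmac

def check_token_insecure(supplied: str, expected: str) -> bool:
    # Functionally correct, but the comparison exits early at the first
    # mismatched character, creating a timing side channel.
    return supplied == expected

def check_token_secure(supplied: str, expected: str) -> bool:
    # hmac.compare_digest takes time independent of where the inputs
    # differ, closing the side channel.
    return hmac.compare_digest(supplied, expected)
```

Both functions return identical results for any input, which is precisely why an automated test suite, or a hurried human reviewer, would not flag the insecure version.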

 

The Rise of Malicious Use

Agentic AI is also beginning to be used to facilitate more sophisticated attacks. It has already been deployed to create highly personalized phishing campaigns capable of bypassing traditional security barriers, and even to generate adaptive malware that adjusts to its victim’s environment in real time.

In this regard, the attack vectors are evolving rapidly. Unlike the past, when attackers would launch broad malware campaigns hoping something would stick, today’s trend is targeted aggression: Agentic AI scans, filters, identifies the most vulnerable target, and executes a precise strike.

According to Gartner, 2025 has already seen a 200% increase in malicious tools linked to artificial intelligence compared to the previous year. This “boom” in AI-driven malicious tools demonstrates that attackers are also leveraging Agentic AI to enhance their evasion, obfuscation, and persistence capabilities within corporate networks.

This scenario has placed growing pressure on cybersecurity defenders, who now face attacks that are larger in scale, more intelligent, and significantly harder to detect. Many of these incidents are designed to power AI-driven DDoS attacks or scalable ransomware campaigns, intensifying the burden on security teams.

 

Startups Driving AI Innovation

RSA Conference 2025 also marked the 20th anniversary of the industry's leading cybersecurity startup competition. The event recorded a 40% increase in applications, with over 200 proposals submitted, reflecting the growing innovation and dynamism in the sector.

What stood out the most was that all 10 finalists centered their innovations on artificial intelligence, underscoring the industry’s strategic focus on developing ethical, forward-looking solutions that harness AI’s potential. The winning project presented an open-source platform to automate the monitoring of attack surfaces, enabling proactive detection and remediation of vulnerabilities.

Among the other finalists were companies that developed real-time protection solutions for both applications and the AI agents themselves. One startup showcased an autonomous research platform that transforms security operations and improves incident response times.

Another project tackled one of today’s most pressing challenges: trust in AI. The startup applied advanced cryptography to ensure transparent and verifiable governance over the behavior of intelligent agents. Similarly, another emerging company introduced an access control model to guarantee that LLMs access only the information needed based on business rules.

Critical infrastructure protection was also represented, with a startup presenting an automated binary encryption platform focused on securing the most fundamental layers of systems. Another company introduced an AI solution for classifying and safeguarding sensitive data, helping to prevent information leaks.

In terms of access control, one finalist proposed a solution that ensures only company-authorized devices can access critical resources, strengthening security across networks, applications, and authentication through hardware-linked credentials.

One of the most noteworthy developments was an AI capable of managing end-to-end cybersecurity processes, aimed at closing the cybersecurity talent gap.

 

A New AI Era

To summarize the key trends observed at the conference, six major themes are shaping the evolving relationship between AI and cybersecurity:

 

  • Transformation of security operations, with intelligent agents automating tasks, collaborating with human analysts, and enabling faster, more precise decision-making.

  • Emergence of new AI-generated vulnerabilities and the pressing need for open standards and regulatory frameworks.

  • Adoption by industry leaders, with companies like IBM, CrowdStrike, and Google spearheading AI-based solution development.

  • Governance and ethics, with a growing focus on responsibility in autonomous systems and alignment with human values.

  • Authentication and identity management challenges, especially given the increasing participation of non-human agents.

  • Global collaboration and standardization, which are vital for defining best practices, ensuring interoperability, and promoting transparency in the use of Agentic AI.

 

The rise of Agentic AI, its adoption by both defenders and attackers, and the dynamic innovation ecosystem surrounding these tools signal the beginning of a deep transformation in cybersecurity. The challenge now is twofold: to harness the power of these agents while never losing sight of the principles of security, ethics, and governance.
