AI in Vulnerability Management: A Misguided Approach?
It's no secret that virtually every cybersecurity company today sells AI as part of its operations. To cite a few statistics: according to Cobalt, 18% of companies adopted AI in their internal teams this year, and 44% of them rated AI-based solutions as more efficient than those without it.
Other published studies, though their figures vary, tell a similar story: Takepoint Research reports that 80% of cybersecurity professionals use AI in their daily work, and that task execution times in SOCs have been reduced by up to 30%.
However, despite these positive figures, Market & Trends reports that at least 21.9% of companies oppose its use, believing it puts the privacy of their information at risk.
But how have we at Rent A Hacker perceived the use of AI? Interestingly, while launching a new project this month, we realized that its application in large organizations seems misguided, specifically in vulnerability management.
Vulnerability management in large organizations with more than 2,000 employees is an extremely complicated issue. With such a large infrastructure, it is impractical to rely solely on network-level scans to identify known vulnerabilities and misconfigurations. Instead, these analyses require more robust, agent-based solutions that avoid network congestion while detecting deviations more efficiently.
The problem? Vulnerability analyses, no matter how efficient they are or how much AI they incorporate, will always carry a margin of error, which is normal and acceptable. But how do we manage those discrepancies?
That is where we believe AI is misguided. Let's take a typical example.
Suppose we have a Cross-Site Scripting (XSS) vulnerability in a commercial application, and it has been assigned a CVE. Because it has a CVE, it will be detected by the agent-based vulnerability analysis platform. It will also likely be detected by the dynamic application security testing (DAST) tool that scans applications exposed to the internet or internally. At the same time, static application security testing (SAST) will detect it too, and will often split it into duplicates: a reflected XSS and a stored XSS. Narrowing the scope further, SAST may flag a SQL injection, or roll everything up into a broader finding such as an Input Validation Error. And because the application is a commercial product, software composition analysis (SCA) will quite likely detect it as well, through the vulnerable component's use in applications.
In other words, one vulnerability has become at least six. In very large organizations, where thousands of vulnerabilities are detected every day, these numbers multiply dramatically.
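To make the duplication concrete, here is a minimal sketch in Python of the kind of normalization-and-grouping step such a pipeline could apply. Everything here is hypothetical for illustration: the `Finding` fields, the CVE number, and the correlation key are assumptions, not any product's schema.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class Finding:
    source: str      # e.g. "agent-scanner", "DAST", "SAST", "SCA"
    title: str       # scanner-specific title, e.g. "Reflected XSS"
    cve: str | None  # CVE identifier, when the tool assigns one
    cwe: str | None  # CWE category, e.g. "CWE-79"
    asset: str       # IP, hostname, or application name as reported

def correlation_key(f: Finding) -> tuple:
    """Group findings that very likely describe the same underlying flaw.

    Prefer the CVE when present; otherwise fall back to CWE + asset.
    This is a deliberately naive heuristic for illustration.
    """
    return (f.cve, f.asset) if f.cve else (f.cwe, f.asset)

def deduplicate(findings: list[Finding]) -> dict[tuple, list[Finding]]:
    groups: dict[tuple, list[Finding]] = defaultdict(list)
    for f in findings:
        groups[correlation_key(f)].append(f)
    return groups

# The XSS from the example, as the tools above might report it:
raw = [
    Finding("agent-scanner", "XSS in vendor app", "CVE-2025-0001", "CWE-79", "10.0.0.5"),
    Finding("DAST", "Reflected XSS", "CVE-2025-0001", "CWE-79", "10.0.0.5"),
    Finding("SAST", "Reflected XSS", None, "CWE-79", "10.0.0.5"),
    Finding("SAST", "Stored XSS", None, "CWE-79", "10.0.0.5"),
    Finding("SAST", "Input Validation Error", None, "CWE-20", "10.0.0.5"),
    Finding("SCA", "Vulnerable component", "CVE-2025-0001", "CWE-79", "10.0.0.5"),
]

for key, group in deduplicate(raw).items():
    print(key, "->", len(group), "reports from", {f.source for f in group})
```

Notably, even this naive key only collapses the six reports into three groups; reconciling the SAST variants with the CVE-bearing reports takes exactly the kind of contextual judgment this article argues AI should be pointed at.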
How does this affect us? First, from a remediation standpoint: what exactly should we fix? Should we go into the application and modify the code? Update the component? Raise a ticket with the vendor? And who should remediate it, Development or Infrastructure, given that it was found on a server?
When we look at a vulnerability management dashboard, these questions are difficult to answer.
Returning to our example: since it is an XSS, we would assume Development has to remediate it. But remember that a vulnerability is associated with an asset; from the vulnerability scanner's perspective, that asset is a server IP address. If the finding is assigned to the Infrastructure team, they will argue it is not their responsibility, since the flaw lives in the application, not on the server. If we assign the XSS to the Development team, every other vulnerability tied to that server, such as missing patches, configuration issues, and operating system flaws, gets assigned to them as well; Development obviously cannot remediate those, and we are back to the same problem. Worse still, we may discover that the IP address actually belongs to a load balancer or firewall that exposes the application rather than to the internal machine, which means our one vulnerability now appears in several places, under different owners, repeated six times, with every team refusing to remediate it because it is not their responsibility.
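One way to picture the analyst logic appealed to later in this article is a routing rule that considers both the finding category and the asset's role, rather than the asset alone. The categories, asset roles, and team names below are hypothetical; this is a sketch of the idea, not any platform's behavior.

```python
def route_finding(category: str, asset_role: str) -> str:
    """Pick a responsible team from the finding category and the asset's role.

    Hypothetical rules for illustration: application-layer flaws go to
    Development regardless of where the scanner saw them; host-layer flaws
    follow the asset; network devices go to the networking team.
    """
    APP_LAYER = {"xss", "sqli", "input-validation"}
    HOST_LAYER = {"missing-patch", "os-vulnerability", "misconfiguration"}

    if category in APP_LAYER:
        return "Development"        # app flaw, even if seen on a server IP
    if asset_role in {"load-balancer", "firewall"}:
        return "Networking"         # the IP belongs to a network device
    if category in HOST_LAYER:
        return "Infrastructure"     # host-level issue on a server
    return "Triage"                 # no rule matched: needs a human

# The XSS seen on a server IP is still routed to Development:
print(route_finding("xss", "server"))               # -> Development
# An OS flaw on the same server goes to Infrastructure:
print(route_finding("os-vulnerability", "server"))  # -> Infrastructure
```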
Headache? Hold on, because this is just the beginning.
Regardless of how the vulnerabilities are presented, whether in a specialized platform, an Excel file, or a SQL database, decision-makers will want to view these metrics in a unified way, alongside metrics from other areas of the business. Usually this is done in data analysis tools with customized dashboards that pull information from various sources. But from which source should our XSS be counted: the vulnerability scanner, DAST, SAST, SCA, the vulnerability management tool, or the ticketing system where the finding was logged for remediation? I've seen quite a few executives upset because the metrics on their dashboards don't match the compliance reports from Information Security, Cybersecurity, Telecommunications, or suppliers.
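Continuing the hypothetical sketch from above, here is a minimal illustration of why dashboard counts drift when each source is counted independently. The precedence list is an assumed policy for electing one canonical record per deduplicated group, not a recommendation from any tool.

```python
# Naive per-source counting: every tool's report counts as one vulnerability.
naive_total = len(raw)   # 6 "vulnerabilities" on the dashboard

# Electing a canonical record per group with a precedence policy
# (hypothetical: trust the agent scanner first, then DAST, SCA, SAST).
PRECEDENCE = ["agent-scanner", "DAST", "SCA", "SAST"]

def canonical(group: list[Finding]) -> Finding:
    """Return the single record that should feed the executive dashboard."""
    return min(group, key=lambda f: PRECEDENCE.index(f.source))

deduped = [canonical(g) for g in deduplicate(raw).values()]
print(naive_total, "raw reports ->", len(deduped), "dashboard entries")
```

Whichever policy is chosen, the point is that the executive dashboard and each compliance report must be fed from the same elected records, or their totals will never reconcile.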
What to do? This is precisely the problem where I believe AI is being applied only partially, or incorrectly.
If we show the details of this XSS to an experienced analyst, they can almost immediately determine who the right owner for remediation is, and spot the duplicates and deviations among the findings. A more experienced analyst can even determine whether it is a false positive that should be discarded entirely. So why isn't AI being applied anywhere to capture this expertise and perform the same analysis automatically, increasing efficiency?
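As a sketch of what that automation could look like: feed a deduplicated group of reports to a classifier (rules, an ML model, or an LLM) that emits a triage decision. The `ask_model` function below is a hypothetical placeholder that returns a canned answer so the sketch runs; the point is where such a call would sit in the pipeline, not which model backs it.

```python
import json

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder for an LLM or ML-model call.

    In a real pipeline this would call whatever model the organization
    uses; here it returns a canned answer so the sketch is runnable.
    """
    return json.dumps({
        "verdict": "valid",          # or "false-positive"
        "owner": "Development",
        "duplicates_of": "CVE-2025-0001 on 10.0.0.5",
    })

def triage(group: list[Finding]) -> dict:
    """Build a prompt from a group of reports and parse the model's decision."""
    prompt = (
        "You are a vulnerability triage analyst. Given these reports of what "
        "may be one underlying flaw, decide: is it valid or a false positive, "
        "who should own remediation, and which reports are duplicates?\n"
        + "\n".join(f"- {f.source}: {f.title} ({f.cve or f.cwe}) on {f.asset}"
                    for f in group)
    )
    return json.loads(ask_model(prompt))

decision = triage(raw)
print(decision["verdict"], "->", decision["owner"])
```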
So what approach have large organizations, and I stress large organizations, actually taken to track their vulnerabilities? That's right: giant Excel files into which they repeatedly copy tables with endless data, relying on filters, pivot tables, and formulas, accepting the deviations they know will occur and absorbing them as part of management.
I'm not exaggerating: just this week I worked on a report of more than 600,000 records that had to be split into 11 files for different teams. And from the outset, the person responsible knew the metrics carry an error rate that is impossible to eliminate completely.
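For scale, the splitting itself is trivial to script; the hard part is the error rate, not the mechanics. A minimal pandas sketch, assuming a CSV export with a `team` column (the filename and column name are hypothetical):

```python
import pandas as pd

# Hypothetical input: one large export with a 'team' column naming the owner.
df = pd.read_csv("vulnerability_report.csv")

# One output file per team, mirroring the manual Excel-splitting workflow.
for team, subset in df.groupby("team"):
    subset.to_csv(f"report_{team}.csv", index=False)
    print(f"{team}: {len(subset)} records")
```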
Our example is an XSS, but those familiar with this problem will surely think of certificate-related vulnerabilities, or the classic critical findings every tool reports when a software version is no longer supported, which automatically inflate the metrics even when compensating controls exist for those older versions.
To date, of the roughly five vulnerability management platforms I know, none uses AI for vulnerability management itself. They use it to auto-complete tickets sent to Jira, to generate compliance-oriented reports, and to report on vulnerabilities that have exceeded their SLAs. But no platform addresses these common problems.
Interestingly, browse the job postings: the people responsible for vulnerability management, who spend their days building those endless Excel files, currently earn higher salaries than the people focused on detecting vulnerabilities, whose tools are the ones using AI. Curious, isn't it?
AI, AI, AI... but is its adoption really addressing real needs, or are we simply used to carrying margins of error through all our daily activities, only now with AI?
I await your opinions and experiences.

By Carlos Lozano | CEO
Fri, 06/27/2025 - 06:30