Vulnerability Management: Beyond Scanners and Reducing Risk
Recently, a pair of critical React vulnerabilities were published and quickly became a trend on social networks, followed by a torrent of emails, guides, and specialized videos detailing how to detect them.
The most publicized are surely CVE-2025-55182 and CVE-2025-66478, known together as “React2Shell.”
I am not trying to compete with all those publications on how to detect it; I want to examine why your organization may be at risk and still not have detected it.
One of our clients immediately started its “Emergency” process after the CVE was added to CISA’s Known Exploited Vulnerabilities (KEV) Catalog, a continuously updated list of vulnerabilities that are being actively exploited by malicious actors, and one I recommend you follow closely.
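As an illustration, here is a minimal sketch of how a team could watch the KEV catalog for CVEs of interest. The feed URL and the `vulnerabilities`/`cveID` JSON schema are assumptions on my part; verify both against cisa.gov before relying on this.

```python
import json
import urllib.request

# Public KEV JSON feed URL -- an assumption; confirm it on cisa.gov.
KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def fetch_kev():
    """Download the KEV catalog (network call -- run deliberately)."""
    with urllib.request.urlopen(KEV_URL) as resp:
        return json.load(resp)

def cves_in_kev(kev_data, watchlist):
    """Return the subset of watchlist CVEs that appear in the KEV feed."""
    listed = {v.get("cveID") for v in kev_data.get("vulnerabilities", [])}
    return set(watchlist) & listed

# Example usage (requires network):
#   hits = cves_in_kev(fetch_kev(), {"CVE-2025-55182", "CVE-2025-66478"})
#   if hits: start the "Emergency" process for those CVEs.
```

Polling this feed daily, independently of the scanner vendor, removes one of the waiting periods described below.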
These “Emergency” processes are normal in any organization with a mature vulnerability management process. Essentially, they mean that a vulnerability is understood to directly affect the organization, so it must be detected and remediated on all assets within a short window, generally 24 hours. That is why it is called an “Emergency.”
So far, so good, we might think. However, when the client extracted that day’s detected vulnerabilities from its reporting system, which in turn is fed by a vulnerability scanning tool, the result was zero.
“Surely it is because the vulnerability was reported today; the scanner vendor will need time to add the CVE to its database before we can get a result. Let’s wait until tomorrow,” the client decided.
This approach may sound reasonable, but it is important to note that if the vulnerability really exists, it opens a window of at least another 24 hours in which no action is taken, during which a malicious actor can actively exploit it. Twenty-four hours doesn’t sound like much, but in many cases exploitation can be as simple as entering Shodan, searching for every vulnerable asset exposed to the internet, and attacking them automatically, so 24 hours is no small matter at all.
The next day they reviewed the information again, and the result was again zero. They formally closed the “Emergency” process, concluding that the vulnerability did not affect the organization. A couple of hours later, the SOC reported two compromises using that very vulnerability.
What failed?
The vulnerability management process of this company, and of a large number of organizations, is based on information generated by scanners. These scanners usually send their findings to some repository, either through a vulnerability management solution or, in many cases, simply through Excel and many hours of manual work.
I have talked at length with these teams, and I believe their mistake is seeing everything as numbers.
Most of the vulnerabilities they manage come from network scans: the version of a vulnerable piece of software is identified, and an update must then be applied to remediate it. For these teams the complexity lies in doing it in the shortest possible time, in reducing numbers, in avoiding manual interventions when an end user must apply the update, or in chasing some lost device that nobody can locate but that must be updated anyway.
Although these internal teams are usually excellent professionals at handling information, they forget where that information comes from, along with the rule every hacker learns from day one: “Organizations need to defend every entry point; a hacker only needs one entry point to compromise everything.”
These React vulnerabilities were not, and never will be, detected by a network scanner, because they execute in an application-level component that must be identified by a dynamic application scanner or by a software component scanner.
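As a rough sketch of what such a component-level check looks like, the script below walks an npm `package-lock.json` and flags dependency versions inside a vulnerable range. The package name and version range here are placeholders invented for illustration; substitute the real advisory data.

```python
import json
from pathlib import Path

# Hypothetical vulnerable range -- placeholder, NOT real advisory data.
VULNERABLE = {"react-server-dom-webpack": ("19.0.0", "19.1.2")}

def parse_ver(version):
    """'19.1.0' -> (19, 1, 0); pre-release suffixes after '-' are ignored."""
    return tuple(int(p) for p in version.split("-")[0].split("."))

def is_vulnerable(pkg, version):
    """True if pkg's version falls inside its known vulnerable range."""
    rng = VULNERABLE.get(pkg)
    if not rng:
        return False
    lo, hi = (parse_ver(x) for x in rng)
    return lo <= parse_ver(version) <= hi

def scan_lockfile(path):
    """Return (package, version) pairs in a lockfile that are vulnerable."""
    lock = json.loads(Path(path).read_text())
    hits = []
    for name, meta in lock.get("packages", {}).items():
        pkg = name.split("node_modules/")[-1]  # strip the lockfile prefix
        ver = meta.get("version", "")
        if ver and is_vulnerable(pkg, ver):
            hits.append((pkg, ver))
    return hits
```

A scan over every repository's lockfiles answers in minutes the question the network scanner could not answer at all.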
So, was the organization’s mistake trusting its scanner? If so, it would be only a matter of budget: implementing a new scanner, or additional modules for the current one.
That is not practical, unless you want to acquire a new scanner every time a new kind of vulnerability appears. The real mistake is that vulnerability management teams often understand vulnerabilities the least. They are so used to defining goals, reducing numbers, and subtracting exceptions from open vulnerabilities that they forget the most important thing: the nature of the vulnerabilities themselves.
It is enough to simply read the CVE (https://react2shell.com/) to realize that no network scanner would detect this vulnerability, yet there are multiple scripts, practically all of them free, that could easily sweep large address ranges, send the results to the ever-present Excel, and deliver an almost equally reliable answer about which devices are vulnerable and which are not.
This is a particular case, but how many times do we find teams constantly chasing administrators to apply routine patches, such as Windows updates, based on information that is wrong in many cases?
A practical example: if a Windows patch that remediates 100 vulnerabilities includes a given CVE, the scanner will report that CVE whether the device is actually vulnerable to it or not. It detects that the patch is not installed and raises the alert that the device is vulnerable; however, this is not necessarily correct. Not having the critical update installed is one thing; being vulnerable to a specific vulnerability is quite another.
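A minimal sketch of the triage this implies: cross-check the scanner’s “patch missing” rows against an inventory of what is actually installed or enabled on the host, to separate confirmed exposure from mere patch gaps. All records below are invented for illustration, not real scanner output.

```python
def triage(findings, installed_components):
    """Split scanner findings into confirmed exposure (vulnerable
    component actually present) vs. patch-gap-only noise."""
    confirmed, patch_gap = [], []
    for f in findings:
        if f["component"] in installed_components:
            confirmed.append(f)
        else:
            patch_gap.append(f)
    return confirmed, patch_gap

# Illustrative scanner export: both rows fire because one KB is missing.
findings = [
    {"cve": "CVE-2024-1111", "component": "msmq"},
    {"cve": "CVE-2024-2222", "component": "hyper-v"},
]
# Illustrative host inventory: the Hyper-V role is not enabled here,
# so only the MSMQ finding represents real exposure.
installed = {"msmq"}
```

The patch-gap rows still deserve the update, but they should not drive the same urgency, or the same dashboards, as confirmed exposure.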
But in the reports, unfortunately, both look the same. In addition, this creates great tension with internal teams that simply cannot keep up with remediating every vulnerability to reach ambitious reduction goals. Those goals are not bad, and it is good to have them, but teams that chase them purely by vulnerability severity forget that they could group vulnerabilities by functionality and remediate much faster. In some cases, they could apply compensating controls that bring the residual risk so low that it is acceptable not to remediate the vulnerability, no matter how critical it is.
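A sketch of that grouping idea: instead of tracking hundreds of CVE rows individually, map each finding to the single patch or change that fixes it, so the remediation list shrinks to a handful of actions. The finding records here are illustrative.

```python
from collections import defaultdict

def group_by_fix(findings):
    """Map each remediating patch/change ID to the CVEs it closes."""
    groups = defaultdict(list)
    for f in findings:
        groups[f["fix"]].append(f["cve"])
    return dict(groups)

# Illustrative export: three CVE rows, but only two remediation actions.
findings = [
    {"cve": "CVE-2024-0001", "fix": "KB5031356"},
    {"cve": "CVE-2024-0002", "fix": "KB5031356"},
    {"cve": "CVE-2024-0003", "fix": "openssl-3.0.13"},
]
```

Sorting the groups by how many CVEs each action closes gives remediation teams a work queue measured in changes, not in dashboard rows.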
And without a doubt, among the biggest problems I see in vulnerability management processes are risk acceptances, or exceptions. A commonly established rule in these processes is “either remediate or create an exception.”
It looks very good when we can reduce numbers by subtracting excepted vulnerabilities from the open count; however, a hacker or malicious actor will not care whether a vulnerability has an exception: they will exploit it anyway. And without a correct analysis of the risk and impact of each vulnerability, we will only be accumulating exceptions “in compliance.”
In summary, here is a list of points to review in our vulnerability management processes and teams:
- Vulnerability management teams must be more than simple reporting teams. Even though they will not technically carry out remediations themselves, they must have a clear understanding of the nature of the vulnerabilities they manage.
- An exception or risk acceptance does not mean we are free of the problem. It only means the problem is documented and understood; realistic remediation goals must still be set, and if new vulnerabilities appear on an asset that already has an exception, document it again to identify possible new risks.
- Although detection, and therefore a detection solution, is the center of the entire process, the process must not be reduced to consuming that one solution’s output. It must be nourished by other tools, such as those used by the SOC, and even by improvised means such as scripts, manual verifications, integration tools, and external catalogs. For this, communication between the different teams is vital.
- SLAs exist for a reason, but it is vital to understand that the complexity of implementing a fix can make them, in some cases, simply unrealistic. That can justify an exception, ensuring that technical teams have enough time for remediation without affecting operations in the process.
- Reduction goals are necessary for compliance, policy, and process, but those who set them must be involved in the organization’s broader information security and cybersecurity processes, to keep the goals realistic and prevent them from being reduced to dashboards that merely look good.





