This blog will contain a host of information about various vulnerabilities and thoughts related to vulnerability management.
To view older blog posts, please visit the archives section.
2025-01-25
The Common Vulnerability Scoring System (CVSS) is a widely used way for vulnerability management teams (VMT from now on) to quickly assess the importance of various vulnerabilities. Convenient, fast, and informative, it has been around since 2005 and is very well known by all IT security professionals. But is it really that well known? Should we really be using CVSS to the extent the community currently uses it?
Opinion: If your VMT bases its actions and efforts solely on the CVSS score, they are doing it wrong.
Even more so: not only are they doing it wrong, it's costing your business a lot of money and exposing you to unweighed risks. Let's explore the question from a critical standpoint.
By itself, CVSS scoring is not a problem at all. The issue arises when the criticality of a vulnerability is assessed purely on this score, or when it is used, as is often the case, for initial vulnerability triaging.
The score itself does not take into account your business, or even the legal framework your business operates within. Given this, it is even further from taking into account the specificities of your various systems, whether legacy or fresh out of development.
If you live in the province of Quebec, information security laws have recently been revamped, largely modelled on the European Union's GDPR. Let's see how relying solely on CVSS scoring in this context could get you in trouble if your VMT were to lean just a bit too much on it for initial vulnerability triaging.
Some might not know this, but there exist multiple versions of the CVSS scoring system. For the sake of argument, let's pretend that an imaginary organization has a policy of only evaluating vulnerabilities that fall within the "high" and "critical" ranges of the scoring system. Luckily for us, V3.0, V3.1 and V4.0 (all still widely referred to) use the same scale to determine the criticality level of a vulnerability. The scale goes as follows:
Critical: 9.0 to 10.0
High: 7.0 to 8.9
Medium: 4.0 to 6.9
Low: 0.1 to 3.9
None: 0.0
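In code, a policy gate built on these bands could be as simple as the sketch below. The `severity` helper is my own naming for illustration, not part of any official CVSS tooling:

```python
def severity(score: float) -> str:
    """Map a CVSS v3.x/v4.0 base score to its qualitative rating band."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Our imaginary vulnerability, scored 5.9, lands squarely in "Medium"
print(severity(5.9))  # Medium
```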
Simple enough! Now let's tie this to an imaginary vulnerability with a calculated CVSS 3.1 score of 5.9. In this case, the vulnerability would be tagged as medium. According to our imaginary business policy, our VMT would set this one aside and keep triaging through vulnerabilities in search of more impactful ones.
Great! The system worked! No effort was spent evaluating lower-risk vulnerabilities... Except... that eighteen months later, your business is hit with a punitive fine of 10 million dollars or 2% of your business's gross sales. All on the basis that you did not comply with your legal obligation to protect the data your clients trust you to safeguard. Oops.
Prioritizing vulnerabilities marked high and critical is a logical choice and surely a best practice, right?
No, it is not, at least not if your process relies solely on the CVSS score of a given vulnerability. Let's look closer at our make-believe vulnerability to understand what happened.
The vulnerability was evaluated using CVSS 3.1 and was given a 5.9 score; a medium vulnerability. However, what the score hides is that the vulnerability, remotely exploitable but of high attack complexity, had a high confidentiality impact. Because of the vulnerability's characteristics (AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N), its score under CVSS 3.1 comes out to 5.9. The potential impact of the vulnerability was effectively hidden by the score. Had the same vulnerability been scored under CVSS V4.0, with the exact same characteristics (CVSS:4.0/AV:N/AC:H/AT:N/PR:N/UI:N/VC:H/VI:N/VA:N/SC:N/SI:N/SA:N), it would have received a much higher score of 8.2 (you will have to believe me as they don't provide a linkable version on this one) and would therefore have been classified as a high vulnerability.
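For readers who want to check the arithmetic, here is a minimal sketch of the CVSS v3.1 base score equation for Scope:Unchanged vectors, using the metric weights published in the specification. The `base_score_unchanged` helper is my own illustrative wrapper, not an official implementation, and it deliberately omits Scope:Changed handling:

```python
import math

# Metric weights from the CVSS v3.1 specification (Scope:Unchanged only)
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
PR = {"N": 0.85, "L": 0.62, "H": 0.27}  # Scope:Unchanged values
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

def base_score_unchanged(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score for a Scope:Unchanged vector."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    # The spec "rounds up" to one decimal place
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N -> the 5.9 "medium" from the post
print(base_score_unchanged("N", "H", "N", "N", "H", "N", "N"))  # 5.9
```

Notice how the single C:H impact term drags the whole score down to "medium" despite the network attack vector: the formula blends impact and exploitability into one number, which is exactly where the information gets lost.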
In this second case, your organization's process would therefore have caught the vulnerability and prioritized it to be fixed.
As a general rule of thumb, somewhere in the range of 30 to 50% of newly scored vulnerabilities use CVSS V4.0 (I have not run the statistics on this, but it seems to be a general ballpark). The rest are still very much scored using CVSS V3.X. The CVSS score provided on a CVE can originate from a multitude of sources. Sadly, not all of these have the same quality. Let's have a closer look at how CVSS scoring of CVEs actually works.
Simply to prove a point, we will explore two vulnerabilities: one that was initially under-evaluated, and one that was initially over-evaluated. The first is CVE-2022-22965, better known as Spring4Shell, a remote code execution vulnerability affecting the Java Spring Framework. The second is CVE-2025-22376, a Perl OAuth library vulnerability which I've recently assessed.
When CVE-2022-22965 was originally published, some reports assessed its score as a "7.5", effectively a "high" vulnerability. The issue here is that Spring4Shell was assessed with both CVSS 2.0 and CVSS 3.1, as its history data shows. Therefore, if your VMT used the CVSS V2.0 score (or a tool that relied on it), unaware of the difference between V2.0 and V3.1, they could reasonably have set it aside in favour of another, "9.8", more critical vulnerability. The problem is that CVSS 2.0 does not have a "critical" classification anywhere on its scale; the highest level is "high". And just as there are differences in scoring from V3.1 to V4.0, there are also differences between V2.0 and V3.1. Under CVSS V3.1, Spring4Shell was scored as a 9.8 critical vulnerability. Now, sometimes the opposite can happen: a vulnerability can be over-evaluated.
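One practical takeaway: before comparing two scores, make sure you know which CVSS version produced each of them. A small sketch of version-aware vector parsing follows; the `parse_cvss` name is mine, and the assumption that a prefixless vector is v2.0 reflects how NVD typically publishes v2 vectors (without the `CVSS:` prefix), which may not hold for every data source:

```python
def parse_cvss(vector: str):
    """Split a CVSS vector string into its version and a metric map.

    v3.x/v4.0 vectors carry a 'CVSS:<version>/' prefix; v2.0 vectors as
    typically published do not, so absence is treated here as v2.0.
    """
    parts = vector.split("/")
    if parts[0].startswith("CVSS:"):
        version = parts[0].split(":", 1)[1]
        parts = parts[1:]
    else:
        version = "2.0"
    metrics = dict(part.split(":", 1) for part in parts)
    return version, metrics

# Spring4Shell's v3.1 vector: the version travels with the metrics
print(parse_cvss("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")[0])  # 3.1
```

A triage pipeline that keeps the version next to the score, rather than the bare number, cannot accidentally rank a v2.0 "7.5" against a v3.1 "9.8" as if they lived on the same scale.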
A couple of weeks back, I published an independent analysis of the CVE on this very blog. My conclusion was that the original CVSS V3.1 score of the vulnerability, 9.8 (critical), was much too high and should have been around 5.3 (medium). I have since been in touch with CISA about this, and the exchange resulted in an update of the CVSS score from 9.8 to 5.3. In this case, relying solely on the CVSS score could very well have triggered an emergency rollout of a fix for this vulnerability. That would have led your VMT and development teams to invest effort in what could have been the wrong asset, at the wrong time, for the business.
Warning: The following paragraph is based on multiple assumptions on my part.
Was the CISA analysis wrong? No, not really; it was likely based on their reality: they can't put the exact same effort into every single vulnerability that passes through them. The volume of vulnerabilities is simply too high. In some cases, they surely have to rely on general, likely automated, and slightly out-of-context information.
If my relationship with the CVSS score were a Facebook status, it would be: "It’s complicated."
It’s complicated because of our industry’s requirements. We need an approach to easily sort vulnerabilities. We also need to be able to do so while relying on an average employee skill set. On top of this, we need to stay afloat with our IT security budget, all while making sure we respect the legal framework inside of which our businesses operate.
The combination of these three factors makes this problem rather hard to solve.
The issue also underscores an educational problem surrounding things the security community takes for granted. As stated by NIST, CVSS is not a measure of risk. It is, however, a way to describe a vulnerability that does, when properly filled in, expose its potential severity. The score is a byproduct. The real valuable information is the data used to calculate the score.
I believe that, at this point in time, the best approach available to organizations is to gather critical thinkers (the smart kind, not the loudmouth kind) and evaluate the general needs of a given business. A certain level of focus should probably be put towards a continuous education plan based on a deep understanding of the basics of security, not to be confused with overspecialization. On top of this, VMTs probably need to approach the issue with a disproportionate amount of humility, as there are multiple “unknown unknowns” when it comes to vulnerability management. We might also want to completely ditch the CVSS score itself and start looking, first and foremost, at the Confidentiality, Integrity and Availability impacts, along with the other provided CVSS metrics, as these most definitely provide a better starting point for vulnerability triaging.
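As a sketch of what score-less triaging might look like, here is a hypothetical policy rule that flags vulnerabilities on their CVSS impact metrics alone. The `flag_for_review` name and the specific "network-reachable with high confidentiality impact" rule are illustrative assumptions of mine, not a recommendation; a real policy would encode your own business and legal context:

```python
def flag_for_review(vector: str) -> bool:
    """Hypothetical triage rule: look at the vector's metrics instead of
    the numeric score. Anything network-reachable (AV:N) with a high
    confidentiality impact (C:H) gets a human look, whatever the score."""
    metrics = dict(
        part.split(":", 1)
        for part in vector.split("/")
        if ":" in part and not part.startswith("CVSS")  # skip version prefix
    )
    return metrics.get("AV") == "N" and metrics.get("C") == "H"

# The 5.9 "medium" from earlier in the post would still be caught:
print(flag_for_review("CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N"))  # True
```

The point is not this particular rule, but that the metrics survive intact where the blended score does not: a data-leak vulnerability behind a 5.9 is visible the moment you read C:H instead of "medium".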
I strongly believe the best is yet to come when it comes to vulnerability management. With AI quickly becoming an ever-present reality, new and holistic approaches will become available over time. This will be a major paradigm shift, as deep analysis will be readily available for what amounts to pennies on the dollar. Not to forget: it will be available at all. Apparently, there is a shortage of IT security professionals. However, take a deep and, hopefully, unbiased look at the general state of things… How many people do you really trust when it comes to vulnerability analysis? How often do you make a decision based on fear rather than confidence? I would suggest we are lacking professionals with the skills to assess vulnerabilities at a technical level, professionals who would then enable better control of IT security spending, and effort, for businesses.
Let's end this with a good old metaphor.
You can appreciate a picture of a sunrise in a valley, but until you stand there and witness it yourself, you will never truly understand its beauty.