A new extensive research paper titled “Improving Vulnerability Remediation
Through Better Exploit Prediction” reveals the number of vulnerabilities discovered over the past ten years (between 2009 and 2018), and also shares the percentage of flaws that were actively exploited.
Surprisingly, only 4,183 of the roughly 76,000 vulnerabilities published in that period have been used in attacks in the wild. The research was conducted by researchers from Virginia Tech, the RAND Corporation, and the Cyentia Institute.
The researchers’ dataset consists of 9,700 published exploits, and 4,200 exploits observed in the wild.
Approximately 12.8% of all vulnerabilities between 2009 and 2018 had published exploit code, while only about 5% of all vulnerabilities were exploited in the wild. Furthermore, only about half of all exploited vulnerabilities have associated published code. The research highlights this as an important finding because it suggests the need for an improved approach to vulnerability remediation.
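The headline percentages follow directly from the counts cited above. As a back-of-the-envelope check (the 76,000 total and the exploit counts are taken from the article; the "about half" overlap is not re-derived here because the exact overlap count is not given):

```python
# Sanity-check the percentages cited in the study from its own counts.
total_cves = 76_000         # CVEs published between 2009 and 2018
published_exploits = 9_726  # proof-of-concept exploits found
exploited_in_wild = 4_183   # vulnerabilities observed in real attacks

pct_published = published_exploits / total_cves * 100
pct_exploited = exploited_in_wild / total_cves * 100

print(f"Published exploit code: {pct_published:.1f}%")  # 12.8%
print(f"Exploited in the wild:  {pct_exploited:.1f}%")  # 5.5%
```

Both figures line up with the approximately 12.8% and roughly 5% reported by the researchers.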
Why was the research conducted in the first place?
The research team focused on how organizations adopt vulnerability management. As it turns out, despite decades of research and technical innovation, there have been few advances in remediation practices, and most organizations currently have more vulnerabilities than resources to fix them. Remediation strategies that target the most severe vulnerabilities are needed more than ever.
According to the report, “one of the key reasons the current approaches are ineffective is that firms cannot effectively assess whether a given vulnerability poses a meaningful threat”. Statistics from previous years show that as few as 1.4% of published vulnerabilities have exploits which have been observed in the wild.
Given that so few vulnerabilities are actually a focus for attackers in the real world, a promising approach towards remediation is to identify vulnerabilities which are likely to be actually exploited, and therefore prioritize firm efforts towards remediating those vulnerabilities first.
To no one’s surprise, vulnerabilities with high severity scores are the ones most exploited. More specifically, these are flaws with a severity score of 9 or higher (10 being the maximum), which are also among the easiest to exploit.
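A naive version of the prioritization idea described above can be sketched as a triage filter on severity scores. This is a hypothetical illustration only, not the paper's prediction model; the record structure, CVE identifiers, and the 9.0 threshold are assumptions for the example:

```python
# Hypothetical triage sketch: remediate findings with a severity
# score of 9.0 or higher first, highest score first. The vulnerability
# records below are made-up examples, not data from the study.
vulns = [
    {"cve": "CVE-2018-0001", "cvss": 9.8},
    {"cve": "CVE-2018-0002", "cvss": 6.5},
    {"cve": "CVE-2018-0003", "cvss": 9.1},
]

# Sort by severity descending, then keep only the critical (>= 9.0) tier.
critical_first = sorted(vulns, key=lambda v: v["cvss"], reverse=True)
urgent = [v["cve"] for v in critical_first if v["cvss"] >= 9.0]
print(urgent)  # ['CVE-2018-0001', 'CVE-2018-0003']
```

The paper's actual contribution goes further, predicting which vulnerabilities are likely to be exploited rather than relying on severity alone.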
It is important to note that the researchers used multiple datasets collected in partnership with Kenna Security, a large, U.S.-based vulnerability and threat management company. The researchers also used a dataset of vulnerabilities published through MITRE’s Common Vulnerabilities and Exposures (CVE) effort between 2009 and 2018.
Data on exploits discovered in the wild was collected from FortiGuard Labs, with evidence of exploitation gathered from the SANS Internet Storm Center, Secureworks CTU, AlienVault’s OSSIM metadata, and ReversingLabs metadata.
Information about written exploits was taken from Exploit DB, exploitation frameworks (Metasploit, D2 Security’s Elliot Kit, and the Canvas Exploitation Framework), Contagio, ReversingLabs, and Secureworks CTU, with the research team finding 9,726 proof-of-concept exploits published during that period.
Finally, with the help of Kenna Security, the team was able to gauge the prevalence of each vulnerability, derived from vulnerability scanner data gathered across hundreds of corporate networks.