An analysis of 24 zero-day vulnerability exploits discovered in 2020 revealed that a quarter of them appeared to be closely related derivatives of previously known exploits – meaning they could have been prevented in the first place, had the original bugs been patched correctly.
The findings, from Google Project Zero, highlight a troubling habit that software developers can sometimes fall into: hastily scrambling to issue an urgent vulnerability patch, only to move on to the next issue without fully grasping the underlying cause or crafting a holistic fix. In some cases, the original patch didn’t even work correctly.
In certain instances, malicious actors simply tweaked a couple of lines of code in order to “revive” a particular exploit method in a slightly different form, according to a Project Zero blog post by security researcher Maddie Stone.
“When exploiting a single vulnerability or bug, there are often multiple ways to trigger the vulnerability, or multiple paths to access it,” Stone wrote. “Many times we’re seeing vendors block only the path that is shown in the proof-of-concept or exploit sample, rather than fixing the vulnerability as a whole, which would block all of the paths. Similarly, security researchers are often reporting bugs without following up on how the patch works and exploring related attacks.”
Brian Gorenc, senior director of vulnerability research and head of Trend Micro’s Zero Day Initiative, agreed that failed patches are too common, noting that it has become standard practice for researchers to scour for potentially overlooked exploit variants even after a fix is distributed.
“The old expression is ‘Patch Tuesday leads to Exploit Wednesday,’” said Gorenc. “This used to mean researchers creating n-day [already known] exploits based on patches. Now, it also means researchers finding zero-day variants of n-day vulnerabilities,” he said.
Of course, the developers themselves should be looking out for these variants too. And they do, but perhaps not as thoroughly as would be ideal. There are various factors behind why software vendors churn out incomplete or insufficient patches – and time is among the most prevalent.
“I don’t think it’s cutting corners as much as it’s about limiting scope in testing,” said Gorenc. “If you are doing variant testing in security patches – and you should be doing variant testing – your scope could grow so large that you end up delaying the security update beyond a reasonable release window. Conversely, if vendors do no variant investigation, they end up releasing point fixes that treat the symptoms but not the underlying problem.”
“There needs to be a balance between a rapid response and a thorough response. That balance is often hard to find, and few vendors want to invest the resources to find it,” he added.
But balancing security needs with growing workloads and shrinking time windows is never easy, especially with record numbers of bug reports landing in developers’ inboxes. “It’s understandable how a vendor can get overwhelmed,” said Gorenc.
“Developers today face immense pressure to deliver software at breakneck paces,” said James Brotsos, developer evangelist at Checkmarx. “The advent of COVID-19 has only increased this demand. As a result, developers might be inclined to seek quick fixes that allow them to close out tickets and mark code as secure, rather than doing a deeper dive into the nature of a given vulnerability.”
This all-too-common philosophy is flawed: “If developers operate with a mentality of ‘fix it and move on,’ they risk failing to address additional existing security issues in an application. Developers should understand that if attackers have identified a zero-day in the wild, they will use similar techniques with the source code as well,” Brotsos continued.
The six zero-days that Google Project Zero linked to previous exploits affected a smattering of products, many of them browsers: Apple’s Safari, Microsoft Internet Explorer, Microsoft Windows, Mozilla Firefox, Google Chrome/FreeType, and Google Chrome again.
This includes an exploit for CVE-2020-0674, a remote code execution vulnerability in the Internet Explorer JScript scripting engine stemming from the way it handles objects in memory. According to Project Zero, this issue was actually related to three prior exploits involving very similar bugs (CVE-2018-8653, CVE-2019-1367 and CVE-2019-1429) from just the past two years. Google’s Threat Analysis Group attributed all of these attacks to the same malicious actor.
“For all four exploits, the attacker used the same vulnerability type and the same exact exploitation method. Fixing these vulnerabilities comprehensively the first time would have caused attackers to work harder or find new zero-days,” Stone wrote.
Brotsos said this bug was particularly troubling, noting that a “simple change of modifying the attack from an index to a reference enabled [one] to exploit the same vulnerability.”
“This is a possible indicator that the fix did not undergo proper review in the context of memory management manipulation. More thorough unit testing, provided training, and pattern recognition could have helped prevent this similar zero-day vulnerability” after the previous ones were discovered, Brotsos continued.
The IE zero-day was also one of three bugs that were not properly fixed the first time, essentially opening up a fifth potentially exploitable bug (CVE-2020-0968) and requiring another patch.
The other two incorrect patches that required a do-over were applied to an elevation of privilege vulnerability in the Microsoft Windows kernel (CVE-2020-0986 and later CVE-2020-17008/CVE-2021-1648) and a type confusion/heap corruption flaw in Google Chrome (CVE-2019-13764 and later CVE-2020-6383) that appears to be a variant of not one but two previous bugs.
The three other exploited flaws that were listed in the report were a race condition in Firefox (CVE-2020-6820) that can trigger a use-after-free, endangering data confidentiality and integrity; a memory corruption issue in Safari (CVE-2020-27930) that can result in arbitrary code execution; and a heap corruption flaw in Chrome/FreeType.
SC Media reached out to Microsoft and the other software vendors for comment on the various exploits.
Project Zero’s Stone noted that the discovery of an exploit should represent a significant setback for an attacker, not just a temporary inconvenience.
“The goal is to force attackers to start from scratch each time we detect one of their exploits,” she said. “They’re forced to discover a whole new vulnerability, they have to invest the time in learning and analyzing a new attack surface, they must develop a brand new exploitation method. To do that, we need correct and comprehensive fixes.”
But comprehensive fixes require proper “investment, prioritization, and planning,” she continued, as well as “developing a patching process that balances both protecting users quickly and ensuring it is comprehensive, which can at times be in tension.”
Areas of investment that she identified as being particularly important are staffing, incentive programs, process maturity, automation and testing, release cadence and partnerships. She also emphasized the need for closer collaboration with vendors on patches and mitigations before the patch is ever released – a move that can help reduce the costs of these investments.
As part of these investments, “vendors may need to bulk up their response and engineering staff until they find a level that’s manageable,” said Gorenc.
Additional experts had their own suggestions for remedies.
“We need to go deeper as part of a continuous improvement mindset well known to many DevSecOps practitioners,” said Altaz Valani, director of research at Security Compass. “It all comes down to moving fast while still remaining secure.”
Valani recommended several steps to achieve this, including appropriate guardrails. “If something is patched, for example, an additional control point could determine whether there are any other attack vectors based on this vulnerability.” He also suggested utilizing an automated platform that “provides impact analysis from patch related policies directly to threat models” and “creating a knowledge base that reduces the signal-to-noise ratio of providing prescriptive guidance around the operational activities to be performed.”
Brotsos similarly endorsed automation: “By implementing tools that embed security into CI/CD pipelines so that scans can be automatically triggered, developers can find and fix flaws without compromising speed and security,” he said.
Additionally, developers should work at improving the way they triage vulnerabilities, Brotsos continued. “Focusing on the exploit path for a vulnerability, instead of just looking at CVSS scores, will give them a better understanding of adjacent paths that might be leveraged, allowing them to discover and resolve them before they become zero-days.”