Equifax Lessons: Risk Hunting at Scale

Michael Roytman    September 15, 2017

This past week saw another high-profile breach in the news, one of the largest ever, and apparently the result of a known vulnerability. Looking back at our analysis of the WannaCry attacks, we examined what we could learn about prioritization from the 1 billion vulnerabilities under management in our platform.

Out of those billion vulnerabilities, 259,451,953 had a CVSS score of 9 or 10, a quarter of all the vulnerabilities we’ve seen. For any organization, that’s entirely unmanageable, even for the 300+ enterprises that this aggregate number represents collectively. Compare that to the Kenna Risk Meter Score, where 9,675,000 of the 1,000,000,000+ vulnerabilities in our platform have a score of 100. That’s less than 1% prioritized.
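For concreteness, here is the arithmetic behind those two figures, a minimal sketch using only the counts quoted above:

```python
# Counts quoted in this post (approximate platform snapshot at the time).
total_vulns = 1_000_000_000      # vulnerabilities under management
cvss_9_or_10 = 259_451_953       # CVSS score of 9 or 10
risk_meter_100 = 9_675_000       # Kenna Risk Meter Score of 100

print(f"CVSS 9-10:      {cvss_9_or_10 / total_vulns:.2%} of all vulnerabilities")
print(f"Risk Meter 100: {risk_meter_100 / total_vulns:.2%} of all vulnerabilities")
# CVSS 9-10:      25.95% of all vulnerabilities
# Risk Meter 100: 0.97% of all vulnerabilities
```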

This latest breach, as well as the growth of our platform to 1.4 billion vulnerabilities, affords us an opportunity to revisit the issue. The vulnerability implicated in the Equifax breach, CVE-2017-5638, exists in the Jakarta Multipart parser within Apache Struts 2 2.3.x before 2.3.32 and 2.5.x before 2.5.10.1. The parser mishandles file uploads, which allows remote attackers to execute arbitrary commands via a #cmd= string in a crafted Content-Type HTTP header. A Metasploit module was released for this vulnerability on March 15, only six days after the vulnerability was published. For years now, our data has indicated that the availability of a weaponized, public exploit is one of the single biggest factors in predicting successful exploitation. This vulnerability proved no different.
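Because the affected version ranges are well defined, a first-pass exposure check can be as simple as comparing a deployed Struts version against them. The sketch below is illustrative only; the helper name and version strings are hypothetical, not part of any scanner API:

```python
def is_vulnerable_to_cve_2017_5638(version: str) -> bool:
    """Check a Struts 2 version string against the affected ranges:
    2.3.x before 2.3.32 and 2.5.x before 2.5.10.1."""
    parts = tuple(int(p) for p in version.split("."))
    if parts[:2] == (2, 3):
        return parts < (2, 3, 32)
    if parts[:2] == (2, 5):
        return parts < (2, 5, 10, 1)
    return False  # other branches fall outside the advisory's affected ranges

# Example: 2.3.31 is affected; 2.5.10.1 is the fixed release.
print(is_vulnerable_to_cve_2017_5638("2.3.31"))    # True
print(is_vulnerable_to_cve_2017_5638("2.5.10.1"))  # False
```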

This six-day gap is where security can move from the “my past predicts your future” definition of prediction so prevalent and reviled by the analytics literati (see Dr. Anton Chuvakin’s (@anton_chuvakin) “Sad Hilarity of Predictive Analytics in Security?”) to a true predictive model, where the troves of data often available about newly released vulnerabilities inform us about the associated exploits even before they are written or released.

This particular vulnerability’s Metasploit module was predictable. While our binary classification model parses over 50 different attributes, some common-sense ones stand out (a sketch of such a classifier follows the list below):

  • the target of opportunity that Apache Struts presents,
  • the breadth of affected operating systems,
  • the high impact of the vulnerability,
  • the remote code execution nature of it, etc.
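As a minimal sketch of what such a binary classifier might look like, here is a logistic-regression model over a handful of illustrative attributes. The feature names, toy training data, and scikit-learn choice are assumptions for illustration; they are not the actual Kenna model or its 50+ attributes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per CVE: [remote_code_execution, affected_os_count,
# cvss_impact_score, popular_target_software] -- stand-ins for a few of the
# 50+ attributes described above, not the real feature set.
X_train = np.array([
    [1, 5, 10.0, 1],   # RCE, many platforms, high impact, popular target
    [0, 1,  2.1, 0],   # local, single platform, low impact, obscure target
    [1, 3,  9.3, 1],
    [0, 2,  4.0, 0],
])
y_train = np.array([1, 0, 1, 0])  # 1 = a public exploit was later released

model = LogisticRegression()
model.fit(X_train, y_train)

# Score a new vulnerability with Struts-like characteristics.
new_vuln = np.array([[1, 5, 10.0, 1]])
print(model.predict_proba(new_vuln)[0][1])  # probability an exploit appears
```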

Note that these attributes are also decision factors for attackers when determining whether the effort to create an exploit is worth undertaking. In other words, with this information, our industry finally has the opportunity to build forecasts of attacks before they happen, rather than write postmortems after millions of records are lost. It also gives vulnerability and risk analysts the ability to prioritize remediation by the evolving likelihood of a given attack.

Let’s look at the data. Below you’ll see a post-mortem on the Struts vulnerability behind the Equifax breach: the volume and velocity (the number of successful exploitations we’ve seen leveraging this CVE, and how quickly those attacks are growing). It has the look of a steady, cyclical campaign.

An interactive version is here
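As a minimal sketch of how volume and velocity could be computed from a stream of exploitation events, consider the pandas snippet below. The timestamps and the pandas-based approach are assumptions for illustration, not a description of the chart’s actual pipeline:

```python
import pandas as pd

# Hypothetical successful-exploitation events for a single CVE (one timestamp per event).
events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2017-03-20", "2017-03-20", "2017-03-21",
        "2017-03-22", "2017-03-22", "2017-03-22",
    ]),
})

# Volume: successful exploitations per day.
volume = events.set_index("timestamp").resample("D").size()

# Velocity: day-over-day change in that volume (how quickly attacks are growing).
velocity = volume.diff()

print(volume)
print(velocity)
```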

Compare this to the newest Struts vulnerability, and one can easily see that the lessons learned just a few months ago apply viscerally today:

Similar to CVE-2017-5638 above, CVE-2017-9805 is a vulnerability in the Apache Struts REST plugin, which uses an XStream handler to deserialize XML payloads. If exploited, it similarly allows a remote, unauthenticated attacker to run malicious code on the application server, either to take over the machine or to launch further attacks from it. The vulnerability was published on September 5th, 2017, and is still going through the NVD assessment process, yet we are already seeing successful, similar “in the wild” exploitations of the vulnerability, and Metasploit availability here:

For an interactive version of the chart above, go here

Imperva reported that a single Chinese IP is responsible for more than 40% of the attack attempts they registered. Shodan reported that this IP is registered to a large Chinese e-commerce company that runs an OpenSSH server, which may indicate a compromised machine. This machine also tried to attack dozens of sites using different automated tools (cURL, wget, and Python-requests) impersonating legitimate browsers, indicating the persistence of the attacker(s). Unlike with past vulnerabilities, most of these attempted attacks (~80%) were exploitation attempts, and only 20% were reconnaissance attempts to find vulnerable servers. The automated nature of these attacks gives us some foresight into the upcoming volume spikes in successful exploitations leveraging this CVE.

In fact, these vulnerabilities anecdotally confirm what we at Kenna already know from analyzing over 1 billion vulnerabilities and 4 billion successful exploitation events: there are factors much more useful in determining vulnerability risk than CVSS or scanner scores alone. To see this clearly, look at the pure volume of vulnerabilities defined as “high” or “critical” by CVSS and the scanners, compared to our context-rich definition of critical:

Let’s get back to why this is important. Patching is a hard and often resource-intensive exercise. The answer to making patching easier is not more bodies or later nights; it’s selecting the most important patches through risk- and/or threat-based prioritization.

At its release, just under half of our customers were affected by CVE-2017-5638. The good news is that of our 1.4 billion vulnerabilities under management, only 2,512 were instances of CVE-2017-5638. So while this vulnerability is risky (we have scored it 100/100 since days after it was published), the hard work of mitigating this risk is the hard work of finding this needle in the vulnerability data deluge. CVSS and severity alone can’t help, since just under 40% of all vulnerabilities are scored between 7 and 10. What is useful is a prioritization mechanism that mirrors the power-law nature of vulnerability risk and allows granular prioritization of vulnerabilities.
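As a final, minimal sketch of what that kind of triage looks like in practice, the snippet below filters a vulnerability inventory by a risk score rather than by CVSS severity alone. The field names, placeholder CVE IDs, scores, and threshold are hypothetical, for illustration only:

```python
# Hypothetical inventory: (asset, CVE, CVSS base score, risk score 0-100).
inventory = [
    ("web-01", "CVE-2017-5638", 10.0, 100),
    ("web-02", "CVE-2016-XXXX",  9.8,  22),
    ("db-01",  "CVE-2015-XXXX",  9.1,  15),
    ("app-03", "CVE-2017-9805",  8.1,  95),
]

# Severity-only triage: everything with CVSS >= 9 looks equally urgent.
by_cvss = [v for v in inventory if v[2] >= 9.0]

# Risk-based triage: only vulnerabilities with evidence-driven scores near 100 surface.
by_risk = sorted((v for v in inventory if v[3] >= 90), key=lambda v: -v[3])

print(len(by_cvss), "items by CVSS severity alone")
print(len(by_risk), "items by risk score:", [v[1] for v in by_risk])
```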

 
