No More Traffic Signals

Ed Bellis    March 23, 2012

Red, Yellow, Ugh…

I have been frustrated by the state of prioritization in security for several years. I recently wrote about how a data-driven approach can help prioritize remediation when there are a large number of issues to contend with. It seems that much of the industry got together years ago and decided we could drop millions of issues into buckets of red, yellow and green. Simple. Now all I have to do is start with all the red issues and fix those right away, after all, they are RED! The problem with this approach, when you dig into the issues, is that prioritization can be complex and include a lot of different factors. Adding to the complexity, those factors are often different from organization to organization. I am all for breaking things down to their simplest parts, but this approach obfuscates the complex factors that go into prioritization, it does NOT eliminate them.

Information Security Needs a Decisioning System

Let's start with a seemingly simple example from a world I know well.

What factors should we know about a defect or vulnerability to help guide how we prioritize remediation? Here are a few things directly off the top of my head…

  • Exploitability
    • How easy is it to exploit this vulnerability?
    • Are there publicly available exploits through Metasploit, Core, ExploitDB?
    • What is the access vector? Does it require a multi-vector attack? Is it behind authentication or an additional control thus reducing the attack surface?
  • Asset Affected
    • How do we value this particular asset?
    • What type of data is stored or processed by this asset?
    • Is the asset part of a larger system (see Business Process Affected)?
    • Are there specific confidentiality, integrity or availability requirements tied to this asset?
    • Are there compliance requirements or additional SLAs for this asset?
  • Network Location
    • Is the vulnerability/asset publicly available?
    • Does it sit within a DMZ or core network?
    • What additional assets or systems can a threat agent access from here (see Multi-Vector Attacks)?
  • Business Processes Affected
    • Is the asset above part of an important business process?
    • Does exploiting this vulnerability interrupt or compromise this process? (CIA)
  • Number of Users Affected
    • How many users are affected by an exploit of this vulnerability?
    • Is the attack directly against users of the application?
  • Discoverability
    • How easy is it to discover this vulnerability (through automated tools, specialized skills, etc.)?

There are likely several more, including some unique to your company, but let's use this quick list for brevity's sake.
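To make the list concrete, here is a rough sketch of these factors as a data structure. To be clear, this is a sketch only: the field names, scales and types are illustrative choices, not a standard or anyone's product schema.

```python
from dataclasses import dataclass

# Illustrative encoding of the factor list above.
# All field names and scales are invented for this sketch.
@dataclass
class VulnFactors:
    exploit_skill: int        # 0 = point-and-click .. 3 = expert required
    public_exploit: bool      # canned exploit in Metasploit/Core/ExploitDB?
    internet_facing: bool     # reachable publicly, with no authentication?
    asset_value: int          # 0 = throwaway .. 3 = crown jewels
    data_sensitivity: int     # 0 = public data .. 3 = sensitive/regulated
    pivot_targets: int        # rough count of systems reachable from here
    business_critical: bool   # part of an important business process?
    users_affected: int       # order-of-magnitude user count
    discover_skill: int       # 0 = automated scanner finds it .. 3 = expert
```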

A Simple Example

So let's apply these criteria to a single example.

Vulnerability: Persistent XSS on public web site www.foo.com

  • Cross-site scripting issues can vary quite a bit, but we’ll call this one trivial to exploit. While there isn’t a publicly available exploit, as this is a custom web application, it can be exploited with a small amount of skill and a browser.
  • www.foo.com is our public-facing web site. It doesn’t process much data and is mostly informational. It’s important for our sales, marketing and public relations.
  • Our public site is available directly and not behind authentication. There are several systems within the DMZ that can be accessed from www.foo.com. Some of these systems include processors of “confidential” but not “sensitive” information.
  • The primary business processes associated with the site are public relations and marketing.
  • Our public site receives millions of unique visits per month, and an exploit of this vulnerability can directly attack these visitors. The payloads can vary, but assume the worst.
  • Discovering this cross-site scripting issue is trivial and can be done through automated tools or manually via a web browser.

In this simple example we start to get a feel for how serious (or not) this vulnerability is. Just by running through this off-the-cuff list of decisioning factors, I can see how this could result in an attack against a large number of our users, and the likelihood is fairly high given how little skill is needed to both discover and exploit this defect.
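Encoded in the illustrative VulnFactors sketch from above, the example looks something like this; the values are a judgment-call reading of the narrative, not measurements:

```python
# The www.foo.com persistent XSS, encoded with the illustrative
# VulnFactors fields defined earlier. All values are judgment calls.
foo_xss = VulnFactors(
    exploit_skill=0,          # trivial: a browser and a little skill
    public_exploit=False,     # custom app, so no canned exploit
    internet_facing=True,     # public site, no authentication
    asset_value=1,            # informational site, but PR/marketing care
    data_sensitivity=1,       # nearby DMZ systems hold "confidential" data
    pivot_targets=3,          # several DMZ systems reachable from it
    business_critical=True,   # supports the PR and marketing processes
    users_affected=1_000_000, # millions of unique visits per month
    discover_skill=0,         # automated tools or a browser find it
)
```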

We Can Be More Quant Than This

Have I oversimplified this? You bet. There are likely several other factors that drive prioritization here, including competing with other priorities (opportunity costs). I’ve also simplified this down to a qualitative decision, but there’s no reason why this can’t be more quantitative. My point here is that even a short, simple off-the-cuff list brings a lot more relevant factors to my remediation priorities than red, yellow and green.
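As a gesture toward that quantitative version, here is a toy scoring function over the same sketch. The weights and the likelihood-times-impact shape are placeholder assumptions, not a recommended model; the point is that every factor is explicit and tunable rather than hidden behind a color:

```python
import math

def priority_score(v: VulnFactors) -> float:
    """Toy weighted score over the illustrative factors: higher = fix sooner.
    The weights are placeholders an organization would tune."""
    likelihood = (
        (3 - v.exploit_skill)              # easier to exploit, higher score
        + (3 - v.discover_skill)           # easier to discover, higher score
        + (2 if v.public_exploit else 0)   # canned exploit available
        + (2 if v.internet_facing else 0)  # nothing in the way
    )
    impact = (
        v.asset_value
        + v.data_sensitivity
        + v.pivot_targets                  # room to move laterally
        + (2 if v.business_critical else 0)
        + math.log10(max(v.users_affected, 1))  # dampen huge user counts
    )
    return likelihood * impact

print(priority_score(foo_xss))  # 8 * 13.0 = 104.0
```

Change the weights and the ordering of your backlog changes with them, which is exactly the conversation that red, yellow and green never lets you have.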

13 thoughts on “No More Traffic Signals”

  1. Pingback: Big Data Information Security Maturity Scale – Where are you on this scale? « facebookjustice

  2. Pingback: Big Data Infosec – Bigsnarf Open Source Solution « BigSnarf blog

  3. Joshua

    The bulk of this is already covered by the Common Vulnerability Scoring System (CVSS). Why not start with that? If the model doesn’t cover enough factors, just add them as an extra scoring factor (like the Environmental ones). Many vendors are already including the base scores in their advisories, so you don’t have to start from scratch. The NIST National Vulnerability Database also has a CVSS calculator to get a feel for how well the model works for your use cases.

    http://www.first.org/cvss
    http://nvd.nist.gov/
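
    For a feel of what’s under the hood, the v2 base score is just a little arithmetic over lookup tables. A minimal sketch in Python (the constants come straight from the first.org specification; only the code framing is mine):

    ```python
    # Minimal CVSS v2 base-score sketch; constants are from the
    # first.org specification linked above.
    AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector
    AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # Access Complexity
    AU = {"M": 0.45, "S": 0.56, "N": 0.704}    # Authentication
    CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # Conf/Integ/Avail impact

    def base_score(av, ac, au, c, i, a):
        impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
        exploitability = 20 * AV[av] * AC[ac] * AU[au]
        f = 0.0 if impact == 0 else 1.176
        return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

    # A typical XSS vector (AV:N/AC:M/Au:N/C:N/I:P/A:N):
    print(base_score("N", "M", "N", "N", "P", "N"))  # 4.3
    ```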

  4. Ed Bellis (Post author)

    @joshua a valid point. I’m in no way suggesting we skip CVSS, or specifically the factors that make up the environmental score. In fact, we rely on many of those factors in Risk I/O. But even including the base and temporal factors, I would treat that as a beginning. More importantly, we need to understand that those are variables that often need to be completed by individual orgs, not a blanket CVSS base score for everything. I do agree with you that CVSS is a start, but I look at it as just that, a start.

  5. Joshua

    Ahah, I didn’t spend enough time looking at the site. My apologies. I offer a new line of discussion to make up for that.

    I like the lure of more data, and more granular data. Presuming we have that nirvana, including your proposed extra bits of granularity, how else can we use that information? Are we close to having a reasonably apples-to-apples scoring process within a specific business context?

    If so, does that mean we can start using the trending data to predict where our next vulnerability or issue will arise? After all, if server XYZ has a mean score of CVSSv99 9.221 over an appropriate period of updates, is that server more likely to have a problem than the one a rack over with a 2.12 score?

    I think we’re edging in that direction; however, I don’t think we’ll quite be there for a while. Now, if we could plug in better data (like $VENDOR internal code flaw fix count) from outside our local scope, maybe…
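
    Even something naive gets at the idea: rank assets by their trailing mean score and watch the top of the list. A toy sketch (the server names and scores below are made up):

    ```python
    from statistics import mean

    # Hypothetical per-server history of issue scores (invented data).
    history = {
        "server-xyz": [9.2, 9.7, 8.9, 9.3],  # chronically severe findings
        "server-abc": [2.1, 1.9, 2.4],       # consistently mild findings
    }

    # Naive watch list: rank servers by trailing mean score, worst first.
    watch_list = sorted(history, key=lambda s: mean(history[s]), reverse=True)
    print(watch_list)  # ['server-xyz', 'server-abc']
    ```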

  6. Ed Bellis (Post author)

    @joshua I like where you’re going with the trending data. We’re obviously in the early stages of utilizing information we already have but I think a lot of what you talk about in your example is possible.

    Could we take your vulnerability prediction example a step further? A lot of different attributes could go into determining the “why”. Is a particular team less responsive to patching and updates? Is it the technology stack that is more prone to vulnerability or misconfiguration? Are there other environmental reasons?

    By determining root cause you may more accurately predict the next issue, as well as risk-rank new projects or applications prior to deployment. By combining vulnerability, misconfiguration, defect and issue data with operational data such as logs and events, threat feeds, and breach data (we need more of this), we could also extend our predictive analytics to security breaches, not just issues.

    Certainly more food for thought.

  7. Mike Lloyd

    Nice methodology, Ed!

    I have to agree with Joshua – much of this is already in CVSS, and as standards go, that one has FAR more traction and “ecosystem” around it than most. I emphasized CVSS as a basis for calculations exactly along these lines in my talk at BSidesSF – I see you posted about being there, but I don’t think we met at the event. (I love BSides – so much less corporate than RSA!)

    What CVSS lacks is the block you have here on “Network Location”. That’s easier described than practiced, in my experience. But then, I would say that, since my day job is helping figure out exactly this piece 🙂

    Anyway, for anyone interested in seeing practical examples of using Ed’s style of approach, here’s a recording of my BSides preso:

    http://www.brighttalk.com/webcast/7651/44301

    If you can’t abide the (free) registration, my slides are here:

    http://www.redsealnetworks.com/downloads/collateral/security-metrics-that-dont-suck.pptx

  8. Ed Bellis (Post author)

    @Mike, thanks much. While I didn’t get a chance to catch your talk in person, I did watch it online. Great stuff. As mentioned, I think CVSS serves as a solid start and there’s no reason to skip or ignore it. I do think, however, there’s a ton of potential beyond CVSS. Some of that is being tackled by what you mention in your talk. In a world of managed service providers and SaaS providers, some of these factors can be scaled across multiple networks to give a better picture of the attackers and help with prioritization.

    Very good presentation and thanks for sharing it.

  9. Ed Bellis (Post author)

    @Mike and @Joshua – I wanted to also point out a good post from Jeff Lowder on some shortcomings within CVSS where we might be able to make better prioritization decisions with additional data and context. I think it’s relevant to the discussion here.

  10. Pingback: Data Driven Security Presentation

  11. Pingback: The Top 10 Risk I/O Blog Posts of 2012 | The Risk I/O Blog
