This was my first time attending RSA, and on top of that I am fairly new to the security industry. If RSA were a Senate race, I would be Ashley Judd. I am not, however, new to statistics. What follows is an outsider’s perspective on Metricon, one without any preconceptions of the space. Spoiler: to be more secure as an industry, we need to share information. Below I’ll give you the why and just a little bit of the how.
Davids & Goliaths
From the numerous discussions I’ve had in the halls of RSA and at my Metricon table, the question of security team maturity has been the underlying force shaping most of the conversation. Vendors decide which customers to go after based on team size and maturity: some choose not to engage the small shops, while others cater specifically to the little guys. Almost every working group at Metricon included at least one metric for the maturity of a security program, such as “% assets scanned,” “development lifecycle time,” “mean time to mitigate,” or “time to discovery.”
In his Metricon lightning talk about setting up new security teams, Dr. Chuvakin advocated for “one metric per broad domain,” those domains being identity management, vulnerability management, and so on. Identifying a quick, out-of-the-box way to set up a new security program is a terrific idea: it correctly highlights that the security space is not just about improving existing programs but, equally important, about proliferating secure practices in “low” places. Metrics like “patching speed” and “% assets scanned” are a one- or two-person job in a smaller environment.
Pair maturity metrics with Dr. Chuvakin’s talk about the “minimum set of metrics” to get the ball rolling, and I’m faced with the age-old question: “Does size really matter?”
My contention is that while the complexity or efficiency of a security program might scale with size, the way in which we measure effectiveness should not. In fact, in my perfect world riddled with unicorn lairs and information sharing, I could tell you how your security team stacks up against someone else’s: 38% from beyond the arc, 82% free throw, decent ticket sales, not too many lawsuits. However, I’m not implying that all security teams are created equal. Let’s give credit where credit is due – size helps.
If you have perfect knowledge of your assets, the security team’s job is easier. Having good estimates of the dev time it takes to remediate certain issues on certain assets also simplifies the decision process. Business operations and reporting up should factor into security decisions. This implies something at the heart of many of the metrics I overheard at Metricon: scaling operational efficiency eases the strain on the security team, makes remediation faster and cheaper, and expands coverage. However, one can easily imagine an infosec team that, even paired with a great operations team, is not very good at choosing what to remediate.
“There’s nothing in life that you can’t improve by throwing money at it” – Congressman Dr. William Foster
The Probability of Exploit talk by Kaiser Permanente and Hubbard Street Research at this year’s RSA is perhaps the most complex metrics practice I’ve seen out there. It involves training analysts to make bias-free decisions, running Monte Carlo simulations, machine learning from the outcomes of said simulations … you get the picture. Attempting to estimate the expected returns of remediation is a multi-analyst, multi-data-scientist, dev-time-intensive process. I have no doubt that this complexity pays off: the remediation decisions become prioritized by dollar amounts, are easy to report up to management, and so on. But getting back to my theme, a bad machine-learning coefficient here or there, and such a system would never know that it’s remediating the wrong vulnerabilities.
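To make that concrete: stripped of the analyst training and the machine learning, the core of an expected-return estimate is a small Monte Carlo loop. Here’s a toy sketch in Python; every probability and dollar figure below is invented for illustration, not taken from the talk:

```python
import random

def expected_loss(p_exploit, loss_low, loss_high, trials=100_000, seed=7):
    """Monte Carlo estimate of expected annual loss from one vulnerability.

    Each trial: the vuln is exploited with probability p_exploit, and if so
    the damage is drawn uniformly from [loss_low, loss_high] dollars.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        if rng.random() < p_exploit:
            total += rng.uniform(loss_low, loss_high)
    return total / trials

# Invented inputs: remediation drops exploit probability from 5% to 0.5%.
before = expected_loss(0.05, 50_000, 500_000)
after = expected_loss(0.005, 50_000, 500_000)
remediation_cost = 4_000  # hypothetical dev-time cost

expected_return = (before - after) - remediation_cost
print(f"Expected return of remediating: ${expected_return:,.0f}")
```

With a uniform loss distribution you could skip the simulation entirely (the expectation is just probability times the midpoint); the Monte Carlo machinery earns its keep once the loss model grows fat tails, correlations, or analyst-elicited distributions.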
Our own Ed Bellis often talks about hitting above the Security Mendoza line: the minimum amount of defense one needs to protect against the threats most likely to occur, based on the evidence and ease of exploit. The name “Mendoza line” might be a little misleading: it isn’t defined by Mario Mendoza himself, but by the league’s distribution of players, which is such that anyone batting below .200 lifetime is unacceptable. In the same way, regardless of size, a security team is playing against a set of attackers or attacker skill levels. As the team matures, it can ward off bigger and badder threats, defend against more nuanced attacks, and detect them earlier. But the reason the Mendoza line is an accurate description of what infosec teams should aim to achieve is that it is an external metric.
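In code, the difference between an internal target and a Mendoza-style external one is simply where the cutoff comes from. A hypothetical sketch (all the peer numbers are made up) that derives a line from a peer distribution rather than from your own history:

```python
import statistics

# Hypothetical peer data: "mean time to mitigate" (in days) reported by
# other security teams. All numbers invented for illustration.
peer_mttm = [3, 5, 7, 8, 10, 12, 14, 20, 30, 45]

# An internal target looks inward ("beat last quarter's number").
# A Mendoza-style line looks outward: a cutoff derived from the peer
# distribution, e.g. the boundary of the worst-performing quintile.
quintile_cuts = statistics.quantiles(peer_mttm, n=5)
mendoza_line = quintile_cuts[-1]  # 80th-percentile boundary of the peer field

print(f"Unacceptable: slower than {mendoza_line} days to mitigate")
```

The point of the sketch is that as the peer field improves, the line moves on its own, exactly the property an external benchmark should have.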
Benchmark against others, not against yourself.
There is a set of necessary-but-not-sufficient metrics that any team must incorporate into its day-to-day work to have a chance of being successful. These are the metrics that measure up against something external. The difference here is that you can’t cut your security team the way a ball club cuts a player – instead you must improve it in order to reach (and surpass) the Mendoza line. In this sense, the Mendoza line is both a measure of minimal safety and of program maturity. The harder question is: how does one define said line?
One way is a new, free feature in RiskDB. “Trending” vulnerabilities are those which are pervasive in our dataset, and hence a good estimator for targets of opportunity. The FAQ explains how we calculate it, and if you have further questions or suggestions, feel free to reach out to me personally.
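I won’t reproduce our actual formula here (that’s what the FAQ is for), but the intuition behind it (prevalence across many environments as a proxy for targets of opportunity) fits in a few lines. A rough sketch, with invented org names, CVE IDs, and cutoff:

```python
from collections import Counter

# Hypothetical scan data: the vulnerabilities each organization's scans
# surfaced. Org names, CVE IDs, and the 0.5 cutoff are all invented.
scans = {
    "org_a": {"CVE-2013-0001", "CVE-2013-0002", "CVE-2013-0003"},
    "org_b": {"CVE-2013-0001", "CVE-2013-0003"},
    "org_c": {"CVE-2013-0001", "CVE-2013-0004"},
}

# A vulnerability "trends" when it shows up in a large share of
# environments, making it a likely target of opportunity.
counts = Counter(cve for cves in scans.values() for cve in cves)
prevalence = {cve: n / len(scans) for cve, n in counts.items()}

trending = sorted(cve for cve, p in prevalence.items() if p > 0.5)
print(trending)  # → ['CVE-2013-0001', 'CVE-2013-0003']
```

No single organization could build this list alone, which is the whole argument for information sharing: prevalence is only visible across many environments.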
This way, while a security team might evolve, grow, and expand its scope, the way in which it benchmarks itself doesn’t change. The theme here is information sharing. We need to find good ways to share so that security teams can start remediating the truly important issues and benchmarking against external threats, not against their own methodologies.