When I ran security at Orbitz, reporting on risk was always a challenge. My team wanted to ensure that we had a clear way to paint a picture of the organization’s exposure to risk—as well as describe the actions we had taken, month by month, in order to reduce that risk.
But frankly, we weren’t very good at it. We could only use the tools we had available to us, and those tools didn’t equip us with a sufficient ability to see, measure, and monitor our risk landscape. We could only do one thing really well: play the numbers game. In other words, we were really good at reporting on the sheer number of vulnerabilities we closed.
Fast forward several years and I can see that little has changed. When I talk to companies prior to their joining the Kenna Security platform, I find that reporting on vulns closed—rather than taking a more strategic view of risk—is still the primary modus operandi.
In many organizations, reporting on risk is actually all about volume: “We closed this many vulns last quarter, and last month, and this month.” Sometimes teams take the extra step of assigning CVSS scores to each vulnerability, in the hope of demonstrating that the closed vulns represent a particular level of criticality.
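That style of report boils down to counting closed vulns and bucketing them by CVSS severity. Here's a minimal sketch of what that amounts to; the CVE IDs and scores are made up for illustration, and the severity bands follow the standard CVSS v3 qualitative ratings:

```python
from collections import Counter

def severity(score: float) -> str:
    """Map a CVSS v3 base score to its qualitative severity band."""
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    if score > 0.0:
        return "Low"
    return "None"

# Hypothetical closed vulns for the month: (CVE ID, CVSS base score)
closed_last_month = [
    ("CVE-2021-0001", 9.8),
    ("CVE-2021-0002", 7.5),
    ("CVE-2021-0003", 5.3),
    ("CVE-2021-0004", 9.1),
]

report = Counter(severity(score) for _, score in closed_last_month)
print(f"Closed: {sum(report.values())} vulns")
for band in ("Critical", "High", "Medium", "Low"):
    if report[band]:
        print(f"  {band}: {report[band]}")
```

Notice what this report can and can't say: it tallies volume by severity, but nothing in it tells you whether the organization is more or less exposed than it was last month.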
Here are a couple of examples from organizations that we’ve talked to here at Kenna. Obviously, we’ve taken great pains to file the serial numbers off these examples so the companies aren’t identifiable. But the reality is, these examples could come from any of hundreds of thousands of companies, because most organizations are reporting on risk in the same way.
Here’s another example:
Each of these examples shows a highly ordered, professional approach to vulnerability management. They group the assets appropriately, assign levels of risk to them, and sort them by severity. The metrics include scanner coverage, vulnerability density, and time to remediation.
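Those metrics are straightforward ratios. As a minimal sketch, using hypothetical counts and the common conventions (coverage as the fraction of assets scanned, density as open vulns per scanned asset; a given team's definitions may differ):

```python
# Hypothetical figures for one reporting period
assets_total = 500      # assets in inventory
assets_scanned = 450    # assets the scanner actually reached
open_vulns = 9000       # open findings across scanned assets

# Fraction of the asset inventory covered by scanning
scanner_coverage = assets_scanned / assets_total

# Average number of open vulns per scanned asset
vuln_density = open_vulns / assets_scanned

print(f"Scanner coverage: {scanner_coverage:.0%}")
print(f"Vulnerability density: {vuln_density:.1f} vulns/asset")
```

These numbers are easy to produce and easy to trend over time, which is exactly why they dominate reporting decks.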
Here’s the problem: they’re completely useless for understanding the organization’s true exposure to risk. These spreadsheets represent an inventory of vulnerabilities, and indicate some level of prioritization assigned to them according to CVSS, but there’s no true context.
Any member of the board will scratch their head and ask, “Great, you closed 10,000 vulnerabilities last month. So…is that…good?”
Then they will proceed to ask:
- What’s our true risk level now, and where was it before?
- Where is it in relation to where we need it to be?
- What’s our progress been over time—not in simply closing vulnerabilities, but in reducing our exposure to risk?
None of these questions are answered in these intricate and well-meaning spreadsheets. They demonstrate a mastery of vulnerability volumes, but not an ability to understand the organization’s actual risk posture. Here’s what the board really wants to know:
- What’s our likelihood of a breach?
- If we have a breach, what’s the impact?
- What assets are most exposed, and what’s the plan for reducing that exposure over time?
- What’s the cost for that reduction, and does it align with the current allocation of budget and resources?
Any board will want storytelling behind the data. They don’t just want a static number—but rather a narrative that aligns all the pieces of the organization’s security landscape and describes its overall progress towards reducing risk.
In the next post, I’ll talk about how we actually do that.