Friday, February 14, 2014

What analysis do we really need to guide vulnerability management?

This is the first of a series of posts on the topic of doing quantitative risk analysis in the face of intelligent and adaptive adversaries.  Later posts will dig into research topics like combining risk analysis with game theory, but this first post is mostly a reaction to what other people have said recently.

Rafał Łoś recently posted an article, and then followed it with a guest post from Heath Nieddu, both with this general theme (paraphrasing and condensing):
It's senseless and distracting to attempt to use quantitative risk analysis to make decisions about vulnerability remediation, and even for information security as a whole.  Uncertainties about the future are too great; adversaries too agile and intelligent; and the whole quant risk endeavor is too complicated.  Keep it simple and stick with what you know for sure, especially the basics.
In this post I'm going to address some of the issues and questions this skeptical view raises, but I won't attempt a point-by-point counterargument.  For the record, there are many points I disagree with, plus many ideas that I think are confused or simply misstated.  But I think the discussion will be best served by keeping focused on the main issues.

I'm also appearing on Rafał's podcast, Down the Rabbit Hole, along with some other SIRA members.  I'll let you know when it is posted for listening.

Do Risk Analysis That Will Pay Off in Better Decisions


Jeremiah Grossman of White Hat got the whole discussion started with a tweet thread that began:
AppSec needs a business risk algorithm that helps guide precisely when a vulnerability should be fixed: now or later?
Before I get into the heart of the debate, I want to offer a viewpoint on this statement, and raise the question: what analysis do we really need? 

Very often, people in information security focus exclusively on triage decisions regarding the (current) open vulnerabilities list in their organization.  Yes, this may be the "business problem" that they need to solve, but it's really not the business problem the organization as a whole needs to solve.

First and foremost, businesses need to decide how much to invest in vulnerability management compared to all the other things they might spend money and time on in information security.  The business as a whole also needs to solve the "problem" of what results to expect from vulnerability management and remediation, and how those results fit in with the rest of information security.

This approach raises all sorts of fruitful questions, such as: how do our IT and security architecture and purchasing decisions create the conditions that give rise to these vulnerabilities in the first place?  What if we invested in technologies or solutions that rendered some vulnerabilities irrelevant (e.g. “moving target” defenses)?  How much do we spend on security overall, and do we allocate it optimally?

This approach is part of what I call Big ‘R’ risk management, and I’ve proposed a framework for making these sorts of decisions.

Getting back to the debate that Jeremiah started and Rafał commented on, the top-down Big ‘R’ risk management approach should provide clear but general guidance on what’s important in vulnerability management and what results are expected.  From there, it should be possible to use various inference methods to decide which classes of vulnerabilities should be remediated first and soonest, and which can be deferred.  But – this is important – don’t expect quantitative risk analysis to place a “dollar value” (via ALE or any other form) on each and every vulnerability.  (This is the Little ‘r’ risk approach that I’ve criticized at length elsewhere.)  We don’t need analysis in that form to make good vulnerability management decisions, given that the business has made broad decisions about investment and results in this area.  To make decisions about individual vulnerabilities, it’s perfectly sensible to draw on a variety of evidence, inference methods, and decision criteria.  Just one example of the latter: what vulnerability remediations could we learn the most from?
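
To make that concrete, here's a minimal sketch of what an explicit, rule-based triage function might look like once the business has set its broad priorities. This is my own illustration, not a prescribed method; every field name and threshold in it is a made-up assumption.

```python
# Hypothetical sketch: rule-based vulnerability triage that applies
# broad, Big 'R' decisions the business has already made.
# All field names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Vuln:
    cvss: float            # base severity score (0-10)
    exploit_in_wild: bool  # evidence of active exploitation
    asset_critical: bool   # sits on a business-critical asset?
    learning_value: float  # 0..1: how much remediating it would teach us

def triage(v: Vuln) -> str:
    """Map a vulnerability to a remediation bucket using pre-agreed
    decision rules, rather than a per-vulnerability dollar value."""
    if v.exploit_in_wild and v.asset_critical:
        return "fix now"
    if v.cvss >= 7.0 or v.exploit_in_wild:
        return "fix this cycle"
    if v.learning_value >= 0.8:
        return "fix soon (high learning value)"
    return "defer / accept"

print(triage(Vuln(cvss=9.1, exploit_in_wild=True,
                  asset_critical=True, learning_value=0.2)))  # -> "fix now"
```

Note the last rule: it operationalizes the "what could we learn the most from?" criterion alongside conventional severity.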

Can We Just Leave Adversaries Out of the Analysis?

At the end of his post, Rafał advocated a simple approach.  He said that any quantitative analysis should include only what we know, such as asset value and cost-to-fix for vulnerabilities.  For a dozen reasons, he advocated leaving out any probabilistic factors related to threat actors.

Here’s my answer: No, you can’t leave them out, but you should do probabilistic analysis of threat agents at a higher level. Threat intelligence is part of the Big ‘R’ approach I advocate. There’s just no way to develop a comprehensive information security program while ignoring which threat agents and actions are more likely than others, and how they might trigger loss events. But this does not mean that you estimate the full cross product of threats × vulnerabilities to calculate something like “likelihood to exploit”.  Instead, probabilistic analysis of threat agents should support estimation of the organization’s overall Cost of Risk. This all supports the broad business decision-making I mentioned above, including identifying which classes of vulnerabilities and loss events you should care most about and how much effort, time, and money to put into them. In essence, these broad decisions will define the rules and principles for triaging vulnerabilities.
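
As one hedged illustration of what threat-agent analysis "at a higher level" might look like, here is a minimal Monte Carlo sketch that rolls a few threat-agent classes up into an overall annualized Cost of Risk. The classes, event frequencies, and loss parameters are all invented for the example.

```python
# Hypothetical sketch: Monte Carlo estimate of overall Cost of Risk
# from threat-agent *classes*, not a full threats x vulnerabilities
# cross product. All parameters here are invented for illustration.

import random

# Per class: (annual event frequency, lognormal loss mu, lognormal loss sigma)
THREAT_CLASSES = {
    "opportunistic malware": (4.0, 10.0, 1.0),   # frequent, smaller losses
    "targeted intrusion":    (0.3, 13.0, 1.5),   # rare, large losses
    "insider misuse":        (0.8, 11.5, 1.2),
}

def simulate_year(rng: random.Random) -> float:
    """One simulated year: Poisson-distributed event counts per threat
    class (via exponential inter-arrival times), lognormal loss per event."""
    total = 0.0
    for freq, mu, sigma in THREAT_CLASSES.values():
        t = rng.expovariate(freq)
        while t < 1.0:                      # events arriving within the year
            total += rng.lognormvariate(mu, sigma)
            t += rng.expovariate(freq)
    return total

rng = random.Random(42)
years = [simulate_year(rng) for _ in range(10_000)]
print(f"Estimated mean annual Cost of Risk: ${sum(years) / len(years):,.0f}")
```

The point of a model like this is to support the broad investment decision (how much risk are we carrying overall, and from which threat classes?), not to price any individual vulnerability.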

Once you get down to decisions about individual vulnerabilities, many factors need to be considered.  Yes, it matters a lot whether exploits are in the wild and how frequently a given vulnerability is exploited.  Vendors like Risk I/O are aggregating this data across many organizations and providing ever-more detailed and actionable assessments based on it.  But Risk I/O won’t triage or prioritize your vulnerability list, nor will CVSS scores or any other external source.  You have to do that, based on your own criteria and decision rules.  What’s emerging now is a data-driven approach that will, I believe, deliver much better business results for the money than any “simple” approach, e.g. “just focus on the basics” or “just balance asset value against cost-to-fix”.
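
To sketch what that data-driven approach might look like in practice (again, an illustration of mine with invented weights and an invented scoring formula, not anyone's product), consider blending an externally observed exploitation rate with locally chosen weights:

```python
# Hypothetical sketch: rank open vulnerabilities by blending an
# externally observed exploitation signal with local decision rules.
# The weights and scoring formula are illustrative assumptions.

vulns = [
    # (id, cvss, observed exploitation rate across orgs, asset weight)
    ("CVE-A", 9.8, 0.02, 0.5),
    ("CVE-B", 6.5, 0.40, 1.0),   # lower severity, but heavily exploited
    ("CVE-C", 7.2, 0.00, 0.3),
]

def priority(cvss: float, exploit_rate: float, asset_weight: float) -> float:
    # Local rule: observed exploitation dominates raw severity.
    return (0.3 * cvss / 10.0 + 0.7 * exploit_rate) * asset_weight

for vid, cvss, rate, weight in sorted(
        vulns, key=lambda v: priority(*v[1:]), reverse=True):
    print(f"{vid}: priority={priority(cvss, rate, weight):.3f}")
```

With these weights, the heavily exploited medium-severity CVE-B outranks the critical-but-unexploited CVE-A, which is exactly the kind of locally chosen decision rule that no external score can make for you.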


---

In the next post, I hope to write about game theory and other methods of analysis and estimation that might help us analyze risk when facing intelligent adversaries.

2 comments:

  1. Russell -
    I'll readily admit that I'm not a seasoned 'risk scientist' and reading your response I think I was a bit over-zealous on the notion that we can simply discount or exclude the adversaries in these technology decisions. What I do (still) believe is that at the micro level these types of decisions can get too complicated and fail "in the details" - but the value of including adversary *classes* (I think someone else said this too before me) is important on a macro level.

    Great post, thanks for replying with your nuggets of wisdom and joining the podcast discussion! I'll link back when we post, on Monday hopefully.

    /Raf

    1. Thanks, Raf, both for this comment and for facilitating the debate on your blog and podcast (Twitter, too).

      The esteemed Alex Hutton dubbed Information Risk Analysis a "proto-science", meaning we as a community haven't worked out the foundations and core principles yet. (An example from history: look at medical science in the early-to-mid 1800s and the conflicts between physicians, surgeons, and homeopaths.) As is typical in a proto-science, there are going to be all sorts of confusions, "dead end" paths, and even some "reinventing the wheel". But I see this messiness as good, because out of it will evolve a more solid science and practice of quantitative Information Risk, if such a thing is possible -- and I believe it is!
