
Infosec's inability to quantify risk - wglb
http://blog.erratasec.com/2015/07/infosecs-inability-to-quantify-risk.html#.VbOb3RNViko
======
SCHiM
This article is flawed.

It seems to revolve around the fact that 'people commuting to work are more
dangerous than 1 car stopping on the freeway'.

The article then proceeds to explain why this is so:

> 'No human is a perfect driver. Every time we get into our cars, instead of
> cycling or taking public transportation, we add risk to those around us.'

> 'We often see cars on the side of the road. Few accidents are caused by such
> cars. Sure, they add risk, but so do people abruptly changing lanes.'

The problem with this is that, obviously, the act of stopping the car in the
middle of traffic needs to be _added_ to the risk that the security
researchers involved have been generating the whole year round. Instead the
author seems to compare the security risk of this incident with the risk an
average driver generates in a whole year.

And even if it didn't, the author's example is also a fallacy of relative
privation. The fact that other things are riskier than stopping a car on the
freeway does not mean that stopping the car is not dangerous.

Pretty ironic that the author fails in his own analysis while talking about
'proper "risk analysis"'.

~~~
tptacek
I think this is an interesting and valid response but that you might be
missing his point. He's not saying that relative risk says the highway stunt
was ok; in fact, he's saying the opposite, if you read all the way to the end.

In that context, it is reasonable to argue that the added risk was minimal.

~~~
SCHiM
I did not miss his point, I even agree with his final paragraph. But all but
the last paragraph of the article is based on flawed comparisons.

------
djcapelis

      In hindsight, it's obvious to everyone that Valasek
      and Miller went too far. Renting a track for a few
      hours costs less than the plane ticket for the
      journalist to come out and visit them.
    

In a good world, this would be true. In the world we actually live in, we
should remember that a different group of researchers did this in 2010 and
2011[0] and as far as I know, not a single automobile was recalled from the
road over that. This time, over a million were.

Security researchers do irresponsible shit like this because too many
organizations don't fix bugs when they're "disclosed responsibly". And no one
cares about that. When an organization sits on a bug that was quietly
reported, no one tells them they're being irresponsible or endangering anyone.
It's only after someone does something "reckless" enough with a bug that
change happens, and then we blame the people who did some particular reckless
event, rather than the people who created a whole system of reckless neglect.

Don't get me wrong. I wish we lived in a world where security researchers
didn't have to disable cars on a public highway to make a god damn point. But
I don't blame them that we live in a world where that may well be the best
avenue available to them to get dangerous flaws fixed.

[0]
[http://www.autosec.org/publications.html](http://www.autosec.org/publications.html)

------
nly
This disturbs me. Basically I think he's saying we should accept flawed and
unproven technologies in the name of progress because "eh, any new risk is
small compared to the risks we already face anyways".

The difference between a software hack and bad driving is one of
_responsibility_ for the risks we face. When people are negligent, we humans
like to see people taking responsibility and facing justice. This is why the
Jeeps have been recalled: the manufacturer is taking responsibility. Sloppy
driving is also transient and hard to predict. It occurs when drivers are
stressed, fatigued or distracted. Software flaws are just there. They can be
triggered maliciously on-demand.

Debating whether self-driving cars are OK because, hey, they will probably
save more lives than they will take, gives me chills. What about the emotional
detachment this brings? Who will be responsible for these deaths?

I'm not exactly sure what I'm trying to put my finger on. It's not so much our
inability to calculate probabilities and run statistical models. This is about
something more sociological than that.

~~~
pyre
> Who will be responsible for these deaths?

There is not always a responsible party. What about the child that runs out
into the road, and you have no chance to avoid them? Will you decry self-
driving cars when this incident happens to them?

Searching for "who is responsible" when the car is self-driving is self-
defeating. When the car is driven by an algorithm, the algorithm can be
improved. One death could prevent many more. In the case of human drivers,
some people refuse to learn from their mistakes, but self-driving cars don't
have this issue, and the effects of that learning are much broader than a
single driver once the 'lesson' has been learned.

~~~
icebraining
Curiously, back when cars first appeared, people _did_ decry them for hitting
children (and adults) even if they ran into the road. It was only after a wide
campaign (which also created the term "Jaywalk") that car proponents got the
public to reverse their opinion. From an episode of the great podcast 99%
Invisible:

 _Much of the public viewed the car as a death machine. One newspaper cartoon
even compared the car to Moloch, the god to whom the Ammonites supposedly
sacrificed their children._

 _Pedestrian deaths were considered public tragedies. Cities held parades and
built monuments in memory of children who had been struck and killed by cars.
Mothers of children killed in the streets were given a special white star to
honor their loss._

[http://99percentinvisible.org/episode/episode-76-the-
modern-...](http://99percentinvisible.org/episode/episode-76-the-modern-
moloch/)

~~~
jessaustin
It happened many years ago, but I'll never forget the time I was coming home
from work, buzzing west along Foothill Blvd in the darkening evening, and I
turned north toward my neighborhood. A small child jumped out from behind
parked cars, directly into my path. I laid my bicycle down, the pedal and
frame spraying gravel as I stepped off. She looked at me, not scared so much
as lost in thought, and wandered on across the street. I smiled and picked my
bike up.

It is no wonder that cars kill children, and always have. That child would
have died if I had been driving.

------
Animats
"Quantifying risk" statistically is meaningful when the risk is triggered by
random events. _It's not meaningful against an intelligent enemy._

In the early days of commercial security, the enemy was usually lone
"hackers". Today, it's organized crime and nation-states. Consider "spear-
phishing", aimed at people in key positions, which appears to be how someone
(China?) got the entire background check records of most Federal employees.
That was statistically unlikely, but as an attack, was worth it.

Too much of computer security is based on defense against large numbers of
nuisance attacks. Military thinking on defense starts from "what's the worst
they can do to us". It's about capability, not intent. Military organizations
have to be aware that kids throwing rocks at the perimeter fence is not the
real security threat; it's somebody seemingly authorized getting inside and
getting to the good stuff.

~~~
Spooky23
You are 100% correct. One of the best explanations of this that I've heard
recently was on this interview with Brian Snow on the security weekly podcast:
[http://wiki.securityweekly.com/wiki/index.php/Episode332](http://wiki.securityweekly.com/wiki/index.php/Episode332)

I think the discussion was in the second half of the podcast.

I deal with infosec people all of the time, and the devotion to risk models
(with questionable calculations of risk) is usually annoying and in many
cases whitewashes real issues.

------
placeybordeaux
Damn, I was hoping that this would be an article about a general lack of
Infosec's ability to quantify risk, not a diatribe on a recent event.

------
sago
This is something I'm struggling with as I attempt to educate myself more
about security. I was a bit disappointed he went on to talk about the risk of
a _stunt_ , because there is virtually no quantifications of risks in anything
I've read on the _results_ of security research.

This seems to be borne out by the arguments about, say, the speed of
disclosure vs patching. There is no agreement, and seemingly no desire to
quantify the risk and danger of different types of flaw, different strategies
for patching them, different disclosure schedules, etc. So, from my
layperson's view, it seems like a wild west where the person who shouts the
loudest, or acts the most obnoxiously, wins the argument. Cheered on by others
who claim that such behavior is optimal overall. As an engineering discipline,
it is rather mystifying, imho.

~~~
sprkyco
"because there is virtually no quantifications of risks in anything I've read
on the results of security research"

Nearly all results of security research are given at least some metric for
risk quantification. [https://cve.mitre.org](https://cve.mitre.org) is a
single example of an attempt to quantify risk. I'm assuming at least at some
point you have run across these numbers so the statement is patently false or
a complete exaggeration.
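For what it's worth, the scores attached to CVE entries come from a published formula. Here's a rough sketch of the CVSS v3.x base-score calculation, covering only the scope-unchanged case; the numeric weights below are the spec's published values for the named metric levels:

```python
import math

def roundup(x):
    """CVSS 'Roundup': smallest value with one decimal place >= x."""
    return math.ceil(x * 10) / 10

def cvss_v3_base(av, ac, pr, ui, c, i, a):
    """CVSS v3.x base score for scope-unchanged vulnerabilities.
    Arguments are the numeric metric weights from the spec."""
    iss = 1 - (1 - c) * (1 - i) * (1 - a)   # impact sub-score
    impact = 6.42 * iss                      # scope unchanged
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Weights for the vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
# (network-exploitable, no privileges or interaction, high impact).
score = cvss_v3_base(av=0.85, ac=0.77, pr=0.85, ui=0.85,
                     c=0.56, i=0.56, a=0.56)
print(score)  # 9.8
```

Whether a one-dimensional severity score like this counts as "quantifying risk" is, of course, exactly what's being argued about in this thread.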

~~~
sago
> so the statement is patently false or a complete exaggeration.

Oh, don't misunderstand me, it could very well be either. I'm not claiming to
have discovered this based on extensive knowledge.

Thanks for the link. So the CVSS metric is the one you're referring to? I've
not seen that mentioned in vulnerability reports, no. Once again, more than
happy to admit this is my failure or lack of diligence to notice. But when
I've asked before about quantification, I've typically only got variants of
'you must take all vulnerabilities totally seriously, because the bad guys are
powerful and evil.'

~~~
mike_hearn
> But when I've asked before about quantification, I've typically only got
> variants of 'you must take all vulnerabilities totally seriously, because
> the bad guys are powerful and evil.'

The CVE system is something used (mostly) by professionals who deal with the
security/usability/performance/cost/etc tradeoff every day. It makes sense
that they do quantify risk. You see this all the time when MS/Google/Apple etc
decide whether to patch an issue or not.

Random security "experts" on internet forums are not like that. Many are
amateurs with an interest in the topic but they don't work on any major
products and so have never had to be faced directly with those other costs. So
of course they assume that security is the be all and end all, and nothing
else is more important. But you get that in every walk of life. Ditto with
cryptographers and privacy.

------
clwg
I was flying with a friend one time and he jokingly said "never hack the plane
you're flying on". While it was just a joke, it's something I'd never actually
considered, although it's blatantly obvious. Sometimes people get caught up in
their own curiosity, don't think through their plan, and lose perspective on
the consequences of their actions.

While what they did was reckless and irresponsible, I think their overall
intentions were to inform of the risk associated with vulnerabilities/hacking
of this nature and raise awareness. I would assume that they have learned from
this, and won't be shutting down cars on a highway to show off anymore.

The thing about risk is that it's best dealt with when there is a clear
understanding of it. The conversation about what constitutes appropriate
testing and raising the general public's awareness of these vulnerabilities
are both factors that will hopefully mitigate risks moving forward.

------
perlgeek
> In hindsight, it's obvious to everyone that Valasek and Miller went too far.

Not at all; they didn't create new risks, they just exposed existing risks.
And the security community's reaction isn't their fault, either.

And sometimes an industry needs a wakeup call to take a topic seriously.

~~~
icebraining
If doing something risky doesn't increase the risk of bad consequences
happening, what does? Whether they "created new risks" seems beside the point
of the criticism - this isn't about exposing the flaw, it's about reproducing
it in an unsafe way.

~~~
Qwertious
The thing is that on one hand it increased the risk of bad consequences to
them, but on the other hand it indirectly reduced the risk of bad consequences
happening to others.

The _net effect_ on risk is what matters.

------
sprkyco
Accusing the industry of an inability to quantify risk attributes a skill set
to an industry that is probably not responsible for quantifying risk. Infosec
researchers should only be beholden to identifying and detailing risks. There
are infosec subsets that require better skills to identify risks, but those
subsets are the ones more responsible for quantifying risk in an appropriate
manner, not the ENTIRE industry.

Quantifying risk is a very difficult endeavor to do properly, so any measure
of it is done from the security researcher's biased perspective. Yes, infosec
researchers seem to want attention for their work, but this is often because
nowhere near enough attention is paid to it. Actuarial science is an entire
professional field specifically tasked with quantifying risk, at least from an
insurance perspective.

The inferred statement, at least in the title, is that not only do infosec
researchers have to stay up on crypto, assembly, js[buzzword] frameworks and
on and on, but now they must also become actuaries. As a neophyte in the
industry, I'm having trouble catching up to even a static point, let alone a
point that is a moving and rapidly accelerating target, and not in my favor.
Adding to this unachievable standard the requirement that I now study and
become at least somewhat proficient in actuarial science is maddening!

~~~
blincoln
I completely agree. To make things worse, I'm not even sure that it's
practical to use an actuarial approach in a way that will produce remotely
valid results.

Actuaries base calculations of probability on past occurrences. This is why
car insurance is calculated using a huge variety of variables like the age of
the driver, the make/model of the car, etc.

So first of all, where is the data about past compromises going to come from?
Just about no organization is willing to share it unless there is a legal
requirement to do so. Unless something dramatic changes, that means only the
most high-profile events from other organizations will become part of the pool
of data to work with.

More importantly, however, IMO there are far too many variables to do
anything meaningful with that data. When the exploitability of a particular
vulnerability can depend on everything from the specific hardware the OS
hosting the app runs on, to the choice of database back-end, to a single
configuration setting in an XML or other type of file, how is that even
trackable, let alone something that can be calculated with any sort of
accuracy?
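The textbook actuarial quantification that does exist in this space is Annualized Loss Expectancy. A minimal sketch, with entirely made-up numbers purely for illustration:

```python
def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
    """ALE = SLE * ARO.
    SLE (single loss expectancy) = asset value * fraction lost per incident.
    ARO (annualized rate of occurrence) = expected incidents per year."""
    sle = asset_value * exposure_factor   # cost of one incident
    return sle * annual_rate              # expected cost per year

# Hypothetical inputs: a $2M asset, 30% of its value lost per breach,
# one breach expected every four years.
ale = annualized_loss_expectancy(2_000_000, 0.30, 0.25)
print(ale)  # 150000.0
```

The arithmetic is trivial; the comment's point stands that the hard part is getting defensible values for the exposure factor and occurrence rate in the first place.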

------
kbenson
What apologists for this demo seem to gloss over, or worse, not realize, is
that the real problem is not just the increased risk. It's the increased risk
_and_ the removal of choice from all the unwitting participants.

> In college, I owned a poorly maintained VW bug that would occasionally lose
> power on the freeway, such as from an electrical connection falling off from
> vibration. I caused more risk by not maintaining my car than these security
> researchers did.

If, while in college and driving this vehicle, you had decided to film
yourself, made it clear you were pretty certain the car was going to lose
power, got onto a freeway, and when the car lost power said something to the
effect of "oh shit, this is dangerous", then yes, you would be facing a lot of
criticism right now too.

If you were a researcher and had a few more years than college age under your
belt, and if the point was also to raise awareness about something dangerous,
you should expect criticism doubly so, because more is expected from you than
some yokel on youtube trying to make a funny video.

Driving a car is an inherently dangerous activity. We've become complacent
about it, but there are still many deaths each year (~1.4 of every 100 deaths
were car-related in 2013)[1]. For most people, it's the most dangerous thing
they'll do each day. Driving requires a level of cognitive dissonance about
how likely other drivers are to be paying attention and driving with good
intentions. When this is broken, through obviously dangerous driving on the
road (such as a speeder making dangerous lane changes) or the introduction of
unnecessary risks, our reaction is to punish. Police stop dangerous drivers
when they see them. These researchers broadcast equivalent behavior, and they
are getting an equivalent response (albeit from the public).

1:
[http://www.cdc.gov/nchs/data/nvsr/nvsr64/nvsr64_02.pdf](http://www.cdc.gov/nchs/data/nvsr/nvsr64/nvsr64_02.pdf)

~~~
maxerickson
It isn't that apologetic:

 _In hindsight, it's obvious to everyone that Valasek and Miller went too
far._

It's not encouraging them to double down and do more live traffic tests, it's
encouraging other people to calibrate their reactions a little bit, so as to
not lose sight of the very real benefit that came out of the research.

~~~
kbenson
I took it as half apologetic, but really, I was addressing what I saw in
general in arguments from apologists on this issue, not this reaction
specifically.

> It's not encouraging them to double down and do more live traffic tests,
> it's encouraging other people to calibrate their reactions a little bit, so
> as to not lose sight of the very real benefit that came out of the research.

Sure, I think it's extremely important research. I think it's great they did
it. But I think the outcry is important too, so as to not lose sight of the
very real benefit that comes out of keeping public safety in mind when
performing experiments. It sucks that these guys have to be a poster child for
this, when a slap on the wrist would suffice, but I think it's important that
_both_ sides of the issue aren't discounted because of the other side.

------
jdsnape
I'm not sure I agree - maybe I'm lucky to work in an organisation where we
have a good understanding that infosec is all about risk. We build this into
pretty much everything we do, and the only way to get stuff done is by
demonstrating some reduction in risk.

Where we do have a problem, and it's an industry-wide problem, is that there
is no real widely applicable methodology for evaluating and quantifying
security risk. There are lots of risk assessment methodologies, but all of the
ones we've tried still leave you making subjective decisions and best-guesses.

~~~
DCKing
Indeed. The suggestion that risk quantification is absent from information
security is completely alien to me. I work in information security and I work
on risk quantification almost every day.

It is true that there is no objectively agreed-upon method for risk
quantification. But that's somewhat of a red herring. There doesn't have to be
an objectively agreed-upon method for risk quantification to be useful.

------
PowerPete
Not a fan of this post.

Yes, Charlie Miller and Chris Valasek's stunt had _relatively_ low risk, but
the difference is that they were risking the lives of innocent people who had
nothing to do with it. You don't get to "quantify risk" for others who didn't
ask for it, who are completely unassociated with you and your activities.

    
    
      "Business leaders quantify and prioritize risk, but we don't, so our useless advice is ignored."
    

I do agree that progress needs to be made on this front.

~~~
roflc0ptic
> "You don't get to "quantify risk" for others who didn't ask for it, who are
> completely unassociated with you and your activities."

To take an example from the article: when you change lanes in a car, you're
incurring risk for others "who didn't ask for it, who are completely
unassociated with you and your activities." I agree that it's generally
unacceptable to put others' lives at risk, but in actual fact we do it
often.

Not totally related to your point: on the whole people seem to ignore existing
systematic risk and privilege preventing non-systematic risk. Interactions
with cars kill so many people! The unmitigated disaster that is climate change
- a product of using cars - is in all probability going to kill so many more
people! But here we are, having a discussion moralizing about the way some
security researchers did or did not misuse their car (fwiw, I'm in the
"misused" camp).

The problem here might be irresponsible use of cars by two bad actors, but it
might be something larger.

~~~
PowerPete

      when you change lanes in a car, you're incurring risk for others
    

The difference is there's a reasonable expectation of danger from other
drivers every time you're on the road. OTOH, you have no reasonable
expectation of danger coming from random security researchers.

    
    
      people seem to ignore existing systematic risk and privilege preventing non-systematic risk
    

Agree 100%, I think we agree on a lot here.

~~~
enraged_camel
>>The difference is there's a reasonable expectation of danger from other
drivers every time you're on the road. OTOH, you have no reasonable
expectation of danger coming from random security researchers.

Not sure what you mean. If you're driving on the freeway and the car in front
of you loses power, why does it matter whether it lost power due to lack of
maintenance or because it got hacked by security researchers? The impact for
you, and your solution, will be the same: momentary confusion followed by
corrective action.

------
zobzu
I don't wanna secure shit and I like clicks.

Risk is about data. It doesn't matter if it's security-related or not. No
data, no risk quantification.

You do not have data - so your blog post is actually useless.

------
williamcotton
How the hell can we quantify the risk of data? Where is the price discovery
mechanism in a world where "information must be free"? You can only buy
insurance if you know not only how likely it is that you'll lose the thing
you're insuring but also how much it costs!

Maybe we need to update whatever concept of intellectual property we developed
in the 90s that got us to the current state of affairs...

------
bediger4000
Note that before software development suffered from infosec's inability to
quantify risk, it suffered (and still suffers) from lawyers' inability to
quantify risk. If you want a black-and-white requirement, ask your legal
department. After some length of time, they'll say "yes" or "no". Nothing in
between.

It's the same phenomenon.

------
logicallee
Infosec is easy to quantify: every security hole should be treated as though
the lives of every human on Earth depended on it; also, 100% of software has
security holes. If you take the logical conclusion of these two statements,
it's obvious that the only winning move is not to play.

~~~
icebraining
_every security hole should be treated as though the lives of every human on
Earth depended on it_

What possible reason would lead you to do that?

~~~
pyre
Maybe the personalities of some security researchers? /s

~~~
logicallee
yes, this is correct and what I alluded to.

------
walterbell
Verizon has a 70-page report on data breaches as of early 2015, which is a
necessary step towards the quantification of security risk by insurance
companies:
[http://www.verizonenterprise.com/DBIR/2015](http://www.verizonenterprise.com/DBIR/2015)
. As explained in "Cyber-Insurance: Triumph of the Accountants",
[https://securityledger.com/2015/05/cyber-insurance-
triumph-o...](https://securityledger.com/2015/05/cyber-insurance-triumph-of-
the-accountants/)

 _"What will this mean for companies? At the behest of insurers, they will
need to clean up their acts. Just as drivers must show proficiency behind the
wheel over long periods of time, companies will need to show progress along
the curve of “cyber maturity” to gain access to the lowest premiums and the
smallest deductibles."_

------
based2
[https://en.wikipedia.org/wiki/EBIOS](https://en.wikipedia.org/wiki/EBIOS)

------
jerf
The problem with asking for better risk management for infosec is that "better
risk management" pretty much comes out the same. In security terms, systems
are secure when the cost of breaking the protection exceeds the value of what
is being protected. In the software world, in the _vast_ majority of cases,
once a vulnerability is known, the cost of breaking the protection is
indistinguishable from zero. The result is that it _superficially appears_
that we treat things as binary risk, but in fact we are being perfectly
rational.

Some of the rare examples of software security bugs where the attacks are
_practical_ but not _free_ : Breaking RC4. BEAST. DDoS. The rare arbitrary
code executions that require significant time to attempt due to needing to
spray the heap _just_ right.

But most of the time, it's click -> own.

The other thing that prevents useful quantification is that we don't have a
Gaussian distribution to be seen. Rate of discovery, maybe, but that's it. We
do not have Gaussian distribution on size of protected items, density of
vulnerabilities, or anything else. Without that most of our good risk
estimation techniques don't produce numbers that mean much to managers.
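To illustrate the distribution point with a toy simulation (the parameters here are arbitrary, chosen only to make the contrast visible): with Gaussian-ish losses the single worst event is a rounding error on the total, while with a heavy-tailed (Pareto) loss distribution one event can dominate everything else combined.

```python
import random

random.seed(42)  # deterministic run for illustration
N = 100_000

# "Gaussian-biased" world: losses tightly clustered around a mean.
gaussian_losses = [max(0.0, random.gauss(mu=1.0, sigma=0.2))
                   for _ in range(N)]

# Heavy-tailed world: Pareto with alpha just above 1, so rare
# catastrophic events carry most of the total loss.
pareto_losses = [random.paretovariate(1.1) for _ in range(N)]

def max_share(losses):
    """Fraction of total loss contributed by the single worst event."""
    return max(losses) / sum(losses)

print(f"Gaussian: worst event is {max_share(gaussian_losses):.5%} of total")
print(f"Pareto:   worst event is {max_share(pareto_losses):.2%} of total")
```

In the Gaussian case the sample mean is a useful planning number; in the heavy-tailed case it tells a manager almost nothing about next year's worst incident, which is the gap the comment is pointing at.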

This is, IMHO, the fundamental reason why infosec doesn't get listened to. Our
management structures are still highly Gaussian-biased (decades of
manufacturing envy). They can't even wrap their head around software
engineering schedules and costs after several decades, and infosec risks have
_even worse_ distributions! It is, in my very strong and considered opinion,
_perfectly rational_ for infosec to be vigorously ringing the bell. Consider
exactly the bug we're talking about now... it's a vulnerability that has the
potential to _destroy the auto industry_ , were it properly exploited. (It
would not just be Chrysler caught in the crossfire! And my use of present
tense is considered. I have little reason to believe this is fixed, nor that
it is the only such bug.) Our risk management procedures can not even properly
express the idea that an engineer drawing a line between two nodes in the
car's communications bus could have that result, can not properly express the
Black Swan nature of that decision. But that's not infosec's fault, or at
least, not infosec's fault alone.

And it's certainly not a solution for infosec to get more realistic about
risk, because "more realistic about risk" probably means we ought to be an
order of magnitude or so _more_ strident, not less! The more I learn about
the risks in this world the more horrifying it gets, and it's getting _worse_
, not better, as things get more interconnected. Again, look at this car
bug... it's something that could not possibly have existed 20 years ago. None
of us, from CEO to engineer to QA, have even begun to properly process what
this means, let alone react to it! And we won't until the Black Swan bites
_and probably not even then!_

------
FredNatural
"Risk is binary, either there is risk or there isn't."

What tripe.

~~~
rpedela
Taken way out of context.

~~~
omginternets
I was going to say "taken in context, then inverted".

------
philsnow
Author equates the risk of one person operating a single vehicle that
sometimes loses power to the risk incurred when a whole fleet of cars could be
subverted by somebody buying a few hundred dollars worth of hardware and
tinkering for a bit.

~~~
icebraining
No, OP is equating it to the trial the researchers did with a single vehicle.

~~~
philsnow
doh, you're right:

> In college, I owned a poorly maintained VW bug that would occasionally lose
> power on the freeway, such as from an electrical connection falling off from
> vibration. I caused more risk by not maintaining my car than these security
> researchers did.

that's what I get for reading and replying on mobile.

