
'Black swans' and 'perfect storms' become lame excuses for bad risk management - spathak
http://news.stanford.edu/news/2012/november/black-swan-risk-111612.html
======
jbert
This seems to me to be a genuine failing of an efficient market. Over the
medium term, ignoring the low-frequency, high-impact event gives you a margin
over your competition. Hence you succeed at their expense, and they fail or
get bought out by you, etc.

As a simple example (the general argument applies to all sectors and all forms
of risk), consider a bank ("safe bank") keeping $X in reserve to handle
unforeseen events, and another bank ("risky bank") that keeps only $X/2 in
reserve.

Risky bank will have an advantage over safe bank at all times, except when an
event requiring between $X/2 and $X occurs. Should such a rare event occur,
risky bank would fail and safe bank would survive.

Once the frequency of such events drops low enough (once every 5? 10? 20? 40?
years), there is no market pressure on the riskier bank to plan for the
problem (in fact the opposite: the market will destroy safe bank).

The actual threshold period is, I think, determined by how long it takes risky
bank to out-compete safe bank and so drive it from the market as a
significant force.
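
A back-of-the-envelope simulation makes this dynamic concrete. This is a
rough sketch with made-up numbers (the reserve sizes, returns, and shock
frequency are all invented here), not a model of real banking:

    import random

    # Toy model of the argument above; every number here is invented.
    # Each year risky bank earns a little extra on the $X/2 it did not
    # park in reserve; a shock larger than a bank's reserve wipes it out.

    X = 100                    # safe bank's reserve
    YEARS = 40
    SHOCK_PROB = 1 / 40        # one large event every ~40 years

    def run():
        safe_alive, risky_alive = True, True
        safe_profit, risky_profit = 0.0, 0.0
        for _ in range(YEARS):
            if safe_alive:
                safe_profit += 10
            if risky_alive:
                risky_profit += 10 + 0.10 * (X / 2)   # return on freed-up capital
            if random.random() < SHOCK_PROB:
                loss = random.uniform(0, X)           # event needs up to $X of reserve
                if risky_alive and loss > X / 2:
                    risky_alive = False
                if safe_alive and loss > X:
                    safe_alive = False                # cannot happen by construction
        return safe_profit, risky_profit, risky_alive

    trials = [run() for _ in range(10_000)]
    print("risky bank survival rate:", sum(t[2] for t in trials) / len(trials))
    print("mean profit, risky bank :", sum(t[1] for t in trials) / len(trials))
    print("mean profit, safe bank  :", sum(t[0] for t in trials) / len(trials))

With these invented numbers, risky bank's expected profit beats safe bank's
even though it fails outright in roughly four runs out of ten, which is
exactly the selection pressure described above.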

~~~
jacques_chester
> _This is one thing to me which seems to me to be a genuine failing of an
> efficient market._

You seem to assume that all risk is meant to be carried by the banks. As you
note, such a thing is possible (up to the risk-bearing capacity of any given
bank), but such banks would be very expensive. Consequently, most customers
bank with riskier institutions and this places more of the risk back onto
them.

So, in actual fact, this is the market doing what it does pretty well: solving
a hyper-distributed problem among heterogeneous agents with numerous complex,
incompatible preferences.

Sometimes we don't like the outcome. That doesn't mean that the market has
"failed"; it just means that _we don't like the outcome_.

~~~
ordinary
The market is a solution to a problem (or perhaps a set of problems). In the
example in the grandparent's post, that solution does not adequately deal
with the problem at hand. This makes it a _bad solution_.

Even if we can't think of a better solution right now, we shouldn't stick our
collective heads in the sand when we notice errors in our current approach.
Highlighting errors is important, because even if we can't fix them right now,
we may be able to in the future. If we don't know about the flaws in our
current approach, it is impossible to even attempt to think of ways to improve
them.

~~~
jacques_chester
The market is an emergent phenomenon. It's not a designed institution. Hayek
talks about this misattribution as the root cause of a lot of
misunderstanding.

People do think of ways to adjust for things about the market that they don't
like. Those adjustments are generally imposed from outside by force of law and
quite a few of them are later modified or removed because they had seriously
unpleasant side-effects (such as the total disappearance of a market).

Being angry at an emergent phenomenon like markets is like being angry at the
weather, or upset about evolution. It's pointless.

~~~
ordinary
There are certainly aspects of our market economy that are emergent. Trade is
as old as humanity itself, but not all aspects of the economy are so old, nor
as immutable.

The modern corporation, for example, is a relatively new invention. Depending
on your definition, anywhere between a few centuries and a few decades old.
Compared to the time scale on which human evolution has taken place, that's
practically nothing. In those decades or centuries, we and our ancestors have
chosen particular shapes for our economy. Not all of those choices are final.

Whether the choices that influenced this particular example can be changed, I
don't know, nor do I feel qualified to hazard a guess. However, I stand by my
original point: it makes no sense to refuse to consider how we might improve
the system.

~~~
jacques_chester
Modern humans have never refused to consider how things might be improved.

But it is important to realise when we're licked. We can't solve the TSP in
linear time, we can't travel faster than light and it looks like -- in both
theory and practice -- markets are better at solving economic problems than
planned alternatives.

------
konstruktor
She obviously fails to grasp what Taleb calls a Black Swan, or ignores his
definition for some cheap publicity: "The attacks of 9/11 were not black
swans, she said. The FBI knew that questionable people were taking flying
lessons on large aircraft."

They are the perfect example of a Black Swan. Quote Taleb (Black Swan,
p.xxii): "... in spite of its outlier status, human nature makes us concoct
explanations for its occurrence _after_ the fact, making it explainable and
predictable", which is exactly what she is doing. And, one page later, on 9/11:
"had the risk been easily _conceivable_ on September 10, it would not have
happened".

He is not advocating abandoning risk management; he is in favour of risk
management that doesn't require us to predict the future, since it is hard to
reliably estimate the likelihood of very unlikely events.
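
The statistical core of that point is easy to demonstrate. A toy sketch (the
probabilities and history length are arbitrary) showing that the rarer an
event, the worse the relative error of any frequency estimate built from a
fixed-length history:

    import random

    # Estimate an annual event probability p from `years` of observed
    # history; measure the average relative error of that estimate.
    def mean_relative_error(p, years, trials=10_000):
        errors = []
        for _ in range(trials):
            hits = sum(random.random() < p for _ in range(years))
            errors.append(abs(hits / years - p) / p)
        return sum(errors) / trials

    for p in (0.1, 0.01, 0.001):
        err = mean_relative_error(p, years=100)
        print(f"true annual probability {p}: mean relative error {err:.2f}")

For the common event the estimate is serviceable; for the 1-in-1000-years
event, a century of data usually contains zero occurrences, so the estimate
is typically off by 100% or more.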

Concerning the second example of earthquake risk and nuclear power plants:
that, again, is a post-hoc rationalization. Everybody knew that earthquakes
were a risk factor for nuclear power plants and planned accordingly. They
were just not planning for a tsunami this size, as it was very, very unlikely.
So including higher error margins for earthquakes next time is nice, but not
enough. Taleb on this:
<http://www.valuewalk.com/2011/03/nassim-taleb-black-swans/>

Consider the aviation industry: after an accident, they find the root cause
and eliminate it. It is then something expected, and can be dealt with
directly. But they also try to improve the system (e.g. via training) to be
more robust against all the root causes they didn't anticipate.

~~~
jacques_chester
> _They are the perfect example of a Black Swan._

I disagree.

The black swan is something that is completely unforeseeable and for which
there are no previous partial or complete examples, either of the final
outcome or the contributing causes.

Which is her point: 9/11 was foreseeable from the information (the failure was
in connecting it) and the fact that a previous, similar plot had been tried in
France. The clues were there and there was a previous partial example.

~~~
clicks
> _The black swan is something that is completely unforeseeable and for which
> there are no previous partial or complete examples, either of the final
> outcome or the contributing causes._

There is no such thing as something _completely_, entirely 100% unforeseen or
unforeseeable. Someone, somewhere in the world will be seeing it coming...
just as some intelligence analysts saw 9/11 coming, some economists saw the
financial crisis, etc. What we're really asking is what the prevailing,
strongly established consensus, well outside the fringe, says.

~~~
jacques_chester
This is where we start to wander into problems with classical set logic.

No, I'm not joking. Fuzzy sets are pretty much required for any meaningful
discussion of "failed", "foreseeable", etc., if they are to be at all useful
concepts: <http://chester.id.au/2012/04/09/review-drift-into-failure/>

Reduce foreseeability and failure to a binary toggle and you destroy enormous
amounts of high-utility information just so that some syllogisms still work.
Wasteful.
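
A toy illustration of that information-loss point (the events and membership
degrees below are invented for illustration, not taken from the linked
review):

    # "Foreseeability" as a fuzzy membership degree in [0, 1], versus
    # the same data collapsed into a classical binary set.
    events = {
        "another hijacking, somewhere":    0.95,
        "a hijacked plane used as weapon": 0.40,
        "this exact plot, on this date":   0.05,
    }

    THRESHOLD = 0.5  # classical set logic: collapse the degree to a bit

    for name, degree in events.items():
        print(f"{name:33s} degree={degree:.2f} "
              f"-> foreseeable? {degree >= THRESHOLD}")

The binary column keeps the syllogisms tidy, but it erases the difference
between a 0.40 and a 0.05 degree of foreseeability, which is most of what
you'd actually want to know.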

Taleb is a very intelligent man, but AFAICT he does often reinvent existing
concepts with much cooler names. "Antifragile" sounds awesome. "Robust" sounds
boring.

Take "black swan", for example. Given the technology of the day, black swans
simply didn't exist. Iain Banks called these "Outside Context Problems", one
might also call them "paradigm-busters".

Anyhow. I should have padded out my original definition with the usual
legalese about "reasonably foreseeable".

~~~
konstruktor
"reasonably forseeable" is hard to define. Let us, for a moment, use being
able to implement countermeasures without seeming like a crazy lunatic as a
definition. As a politician in a pre-9/11 world, if you had done what would
have been necessary to prevent _a_ 9/11 (not go and arrest the very guys who
did it, that would have required perfect foresight), i.e. add bulletproof
doors to planes, implement the very invasive searches the TSA does etc., maybe
ground commercial aviation until those measures would have been in place, it
would be the end of your career. The remote possibility of a terrorist attack
has only become a justification for about everything after 9/11. Also, 9/11
was just one of many terrorist threats at that time, it's just in hindsight
that we consider this particular one inevitable. Edit: Spelling

~~~
vidarh
Hijackings had happened many times, including with weapons. There was plenty
of knowledge that could easily have justified adding secure, lockable cockpit
doors. That would have been a good, low-impact way of mitigating known risks,
while also taking a whole host of unknown risks off the table or
substantially reducing them, including reducing the chance of 9/11 having the
impact it did, or even being tried.

I think that is one of the changes that more focus on mitigating risk, as
opposed to avoiding it, might have produced. E.g. when faced with a
heightened risk of terrorist attacks, a rational response would be to focus
less on identifying potential attackers and stopping a specific plot, and to
spend more resources on finding low-impact ways to mitigate the consequences
of various broad modes of attack actually getting underway.

The specifics of 9/11 were probably near impossible to predict, but another
eventual hijacking was a near certainty, and an eventual building collapse,
for whatever reason, should have been considered a near certainty too, as
many high-rises have failed over the years. The failure to reduce the
potential impact of those known risks is the real failure of 9/11, not the
failure to prevent the specific plot: these were broad, known risks, and the
actual _causes_ leading up to them would be largely irrelevant for a lot of
mitigating actions.

------
leashless
We call this phenomenon "black elephants."

You have an elephant in the room. After it explodes, everyone will say it was
a black swan.

~~~
batgaijin
Like the Chinese housing bubble?

------
mbesto
I've said it before and I'll say it again. If you want to understand more
about this (especially economically), Daniel Kahneman's _Thinking, Fast and
Slow_ spells it out really well:

<http://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374275637/ref=sr_1_1?ie=UTF8&qid=1355912148&sr=8-1&keywords=thinking+fast+and+slow>

~~~
anythinggoes
This is one major reason why I disagree with the article. Social phenomena
are impossible to fully compute: human beings are not just subject to
comprehensible laws of physics; there is a whole different dynamic of
behavioral and social factors involved that makes statistical modelling a
great deal more complex. Unlike the author, I also do not think that this
problem can just be resolved with technical means; there will always be
uncertainty about human behavior, and thus Black Swans.

~~~
mbesto
Ya, this part irked me the most:

 _"Traditional financial analysis, she said, is based on evaluating existing
statistical data about past events. In her view, analysts can better
anticipate market failures – like the financial crisis that began in 2008 – by
recognizing precursors and warning signs, and factoring them into a systemic
probabilistic analysis."_

So, let's say you do provide a systemic probabilistic analysis of the
impending education crisis the US is about to hit. Don't you think a
government would be gnawing its hands off to get that type of statistical
analysis? Personally, I don't think such a systematic analysis exists.

------
batgaijin
<http://fooledbyrandomness.com/ForeignAffairs.pdf> [pdf warning]

This is a really good geopolitical paper I recently read about black swan
stuff. The general gist, as I understand it, is that artificially supporting
a regime makes it weaker.

------
EzGraphs
Taleb's emphasis in his latest book is on avoiding the naive attempt to
predict the unpredictable. Instead, focus on identifying and making things
"Antifragile" so that they are resistant to (or even benefit from) events that
would otherwise be destructive. With that in mind, risk management becomes
much less about guessing at the future. Instead, the focus is to identify
the fragile (that which is highly susceptible to disruption) and take steps to
make it antifragile.
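
One way to make that concrete is Taleb's framing of fragility as concavity
(and antifragility as convexity) to shocks. A minimal sketch, with payoff
functions invented purely for illustration:

    import random

    # Zero-mean volatility: the shocks average out to nothing on their own.
    shocks = [random.gauss(0, 1) for _ in range(100_000)]

    payoffs = {
        "fragile (concave)":    lambda x: -x * x,  # big shocks hurt disproportionately
        "robust (flat)":        lambda x: 0.0,     # indifferent to shocks
        "antifragile (convex)": lambda x: x * x,   # gains from variability
    }

    # In a calm world (x = 0) all three payoffs are exactly 0; the
    # difference only appears once volatility is switched on.
    for name, f in payoffs.items():
        avg = sum(f(s) for s in shocks) / len(shocks)
        print(f"{name:22s} average outcome under volatility: {avg:+.3f}")

In a calm world all three look identical; add volatility and the concave
exposure quietly bleeds, which is why identifying it matters more than
predicting which particular shock will arrive.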

<http://www.amazon.com/Antifragile-Things-That-Gain-Disorder/dp/1400067820>

It is an enjoyable read so far...

