
The Logic of Risk Taking - the-mitr
https://medium.com/incerto/the-logic-of-risk-taking-107bf41029d3
======
cortesoft
I feel like this idea of 'tail risk' is pretty well understood in the gambling
community, especially in the poker world of no-limit holdem tournaments.

For example, everyone knows that pocket Aces are the best possible starting
hand in poker, and the default strategy is to call any all-in bet if you have
them; your expected value will always be positive, no matter how many other
people call or what they have.

However, there is a well known exception to this rule, and many poker books
talk about it - if you are the small stack at the final table, it might make
sense to fold your pocket aces pre-flop if more than one larger stack has
already called.

The reasoning is that even if you win the hand, you aren't certain to move up
in finishing position, which is what determines your winnings. On the other
hand, if you lose the all in, you are CERTAIN to not move up. If multiple
other people have already called the all-in, then there is a good chance that
SOMEONE will be knocked out, moving you up in the finishing position without
having to take a risk. Therefore, even if you have a positive expected return,
it makes more sense to avoid the 'ruin' situation and fold the pocket aces.
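
A toy payout model makes the point concrete (every number below is invented for illustration; real tournament-equity calculations use ICM and actual payout tables):

```python
# Toy final-table model (all numbers invented for illustration).
# You are the short stack; two bigger stacks are already all in, and in
# this toy setup the loser of their confrontation cannot cover, so a
# fold guarantees that someone else busts and you ladder up.
payout_3rd, payout_2nd = 100, 200   # prize money for finishing positions
p_win_vs_two = 0.65                 # rough equity of aces vs. two callers

ev_fold = payout_2nd                # guaranteed ladder-up in this setup

# Calling: win and you ladder up; lose and you bust in 3rd place.
ev_call = p_win_vs_two * payout_2nd + (1 - p_win_vs_two) * payout_3rd

print(f"EV(fold) = ${ev_fold}, EV(call) = ${ev_call:.0f}")
```

Even with aces a 65% favorite to win the pot, the prize-money EV of folding dominates, because calling risks the ladder-up that folding gets for free.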

~~~
unabridged
I have an idea for a poker metric that measures your risk for entire
tournament. Each time you face elimination, your percent chance of surviving
is multiplied together. So let's say you finish a tournament and faced all in
3 times with these survival rates:

80% * 50% * 70% = 28%

Then you would have a maximum of 28% chance of reaching that point again.
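
The metric is just a product of per-confrontation survival probabilities; a minimal sketch, assuming the rates are known and the confrontations are treated as independent:

```python
from math import prod

def tournament_survival(survival_rates):
    """Chance of surviving the same sequence of all-in
    confrontations again, assuming each one is independent."""
    return prod(survival_rates)

print(tournament_survival([0.80, 0.50, 0.70]))  # ~0.28
```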

~~~
cortesoft
An interesting idea, but the number of times you face an all-in is dependent
on how you do the rest of the game - if you win every hand, for example, you
are never going to face an all-in because you will always be big stack.

------
dvt
Taleb loses me at the end (Nicomachean ethics? come on), but I love his "take
risks, but don't take stupid risks" stance. A stance which I first encountered
in _How to Legally Own Another Person_ [1].

The key takeaway is twofold, in my opinion:

- People underestimate tail risk in their day-to-day: Bob smokes and drinks
daily and eats red meat and doesn't exercise at all. These are "small" events
independent of each other, but the aggregate will probably lead to ruin
(probably a heart attack in his mid-50s).

- People overestimate tail risk (due to a lack of courage[2]): Jill says
she's "risk averse" and that she won't start a business because it might ruin
her -- although in the grand scheme of things, 6 months of your life and a
$10,000 investment are not ruin-inducing.

[1]
[http://www.fooledbyrandomness.com/employee.pdf](http://www.fooledbyrandomness.com/employee.pdf)

[2] Maybe dragging ethics into this wasn't such a bad move after all.

~~~
theyregreat
Exactly. I take fish oil, eat a mostly plant-based diet, don't smoke or
drink, and live comfortably in a van for cheap. Why? Survival is easy. All
risks are not created equal, and following the herd may result in running off
a cliff, i.e. debt, consumption-oriented spending, no love/children/friends,
workaholism, not enjoying vacations/travel/life, and worrying about the wrong
things that don't really matter.

------
sn9
This seems to be a chapter in his book coming out in Feb. 2018. I'll probably
buy it because I usually find his books to be worth reading despite his
writing style, but I dearly hope he has an editor to fix the final product
from whatever this is.

------
theptip
> If someone used a standard cost-benefit analysis [for russian roulette], he
> would have claimed that one has 83.33% chance of gains, for an “expected”
> average return per shot of $833,333.

Forgot to consider the "cost" in the cost-benefit analysis. Most would value
their life significantly higher than 6*$1m, so the expected return is
negative.
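
Putting the "cost" side into the arithmetic makes the sign flip obvious. The prize comes from the article's example; the dollar value placed on a life is of course an assumption:

```python
# EV of one round of Russian roulette, counting the downside explicitly.
prize = 1_000_000
p_survive = 5 / 6

def expected_value(value_of_life):
    return p_survive * prize - (1 - p_survive) * value_of_life

print(expected_value(0))          # ~833,333: the naive "expected return"
print(expected_value(5_000_000))  # ~0: break-even at 5x the prize
print(expected_value(6_000_000))  # negative: a $6M valuation flips the sign
```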

~~~
late2part
Interesting that you think most people value their life over $6M USD. I bet
you that across the entire world, more than 50% of adults would take a 15%
chance of death to win $6M USD.

~~~
ryandrake
$6M is a life changing amount of money. Tempting...

~~~
bigmoneyajja
I have that kind of money and I can tell you: there is nothing up here worth
losing your life over. I certainly would not risk my life to keep my present
lifestyle vs. the one I had in my early twenties when I made and had much
less.

~~~
late2part
I wonder if you guys have spent time in Nepal, India or China or other poorer
countries.

------
qwtel
what is most fascinating about nassim's writings is the night-follows-day
certainty with which they will cause a flock of bitter nerds to crawl out of
their dimly-lit offices to unwittingly mutter one item or another from the
"nassim taleb is wrong because he's not one of us" list, which isn't so much
to proof him wrong, as it is to quickly find one another so that they can fall
into each others arms, delivering reassurances that they are not wrong in
opposing him, and that their distinct lack of real-world success stems from
the cruelness and wrongness of the outside world, not because their supposed
superior insights are not, in fact, so.

Their all-time favorite, to be repeated with ceremonial regularity, is that
his insights are 'trivial'. ignoring for a moment that these people also
believe it 'trival' to bring a product to market once it has been figured out
(by them) in the lab (if they even lower themselves to think in terms of
'products' for the 'populace'), treating his literary writings, which are
intended for the general public, which is very much in need of trivial
insights, as if they were his scientific writings, then dismissing his
scientific writings based on his literary writings, is simply malicious, owed
to the fact that he is the wrong kind of academic, who lifts weights and calls
people 'imbecile' on twitter, hence reflects poorly on their comic book-like
self image of scientists as some kind of quasi-master-race, which is above
cursing, emotion and whose physicality is a mere inconvenience, getting in the
way of dreaming up ever more distant alternate realities, to be sold to the
public as discovery.

~~~
almostarockstar
You should consider using more punctuation. This entire post has only 3
sentences. It's essentially unreadable. You might have a good point, but I
can't understand anything you've said.

~~~
qwtel
i know. it is sort of fun writing it like that, which is why i did it.

~~~
soVeryTired
Not much fun reading it though.

------
fanzhang
Economics and statistics understand perfectly well the distinction between
gambling returns that happen in parallel versus gambling returns that happen
in series.

For example, in financial economics, suppose you're investing and the return
of each stock during each year is an iid random variable R.

If you split your money over 10 stocks and 1 year, you take the arithmetic
expectation of R to calculate your expected returns. If you're investing in 1
stock over 10 years, you should take the geometric average of R to get your
average yearly return.

In other words, if each stock returns 50% chance 3x and 50% chance 0x, you
should definitely invest if you can split your money over a pool of 100
independent stocks over the one time period (you'll have a >99% chance of
>100% gross returns!), but definitely not if you were forced to invest in one
single stock for 100 time periods (you have a >99.999% chance of 0% gross
returns).
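
Both numbers can be checked exactly with the binomial distribution (pure stdlib; the 34-winner threshold is just the first integer above 100/3):

```python
from math import comb

# 100 independent stocks, one period, each 3x with p = 0.5, else 0x.
# Gross return is 3 * winners / 100, so ending above your starting
# stake means winners >= 34.
p_parallel = sum(comb(100, k) for k in range(34, 101)) / 2**100
print(f"parallel: P(ending above 1x) = {p_parallel:.6f}")

# One stock held through 100 periods in series: a single 0x wipes you
# out, so you keep anything only if all 100 periods come up 3x.
p_series_ruin = 1 - 0.5**100
print(f"series:   P(ending with 0)   = {p_series_ruin}")
```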

I'm not sure why NNT feels it necessary to say the smartest Nobel Prize
winning physicists are needed to understand this. I'm pretty sure anyone with
a rigorous high school math education can understand it. In point of fact,
the above distinction was belabored to me in an undergraduate statistics
class and a first-year graduate economics class, both many many years ago.

------
falsedan
What is this word salad

------
StanislavPetrov
>You can safely calculate, from your sample, that about 1% of the gamblers
will go bust. And if you keep playing and playing, you will be expected have
about the same ratio, 1% of gamblers over that time window.

This whole argument is based on the false assumption that all gamblers have
the same chance of winning and losing. In reality (and depending on the game),
skill results in a massive disparity in outcome among gamblers. This is
similarly true for the author's stock market analogy. As the weaker players
are flushed out, you should expect the rate of "bust outs" to slow.

~~~
dvt
Gambling games that require skill are in the objective minority (Blackjack,
Poker, and sports betting); Roulette, Craps, War, Keno, Slots, Bingo, etc.,
are all purely luck-based.

NB: Even in skill-based games, the odds still always (and I mean _always_ )
favor the house. Taleb is right. If you play any game for an infinite amount
of time at a casino, you _will_ go bust.

~~~
dragonwriter
Even if the odds favor you on each event, if they are < 100% success and you
play long enough, you will eventually lose everything. Gambler's Ruin always
wins.

~~~
aurelianito
Just not true. Let's say I bet 1 dollar at a time with a 50% chance of
winning, and I can make 1000 bets every day. If I start with 100 dollars, I
am much more likely to die with a lot of money than broke.

~~~
gerard
You've defined a fair game [1], one which you're just as likely to win or
lose, by the same amount each time. Clearly neither side of this game has an
advantage over the other, for any given coin toss. But you expect that long
term it will work in your favour?

[1] technically it's a martingale
[https://en.wikipedia.org/wiki/Martingale_(probability_theory...](https://en.wikipedia.org/wiki/Martingale_\(probability_theory\))
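
A scaled-down Monte Carlo of the parent's setup (starting at $20 instead of $100 and capping the horizon at 10,000 bets, purely to keep the run fast) suggests the fair game is not so safe: the walk's spread grows like the square root of the number of bets, and at 1000 bets a day it quickly dwarfs a small bankroll:

```python
import random

rng = random.Random(0)

def ruined(bankroll=20, bets=10_000):
    """One run of a fair $1 coin-flip game; True if the bankroll hits 0."""
    for _ in range(bets):
        bankroll += 1 if rng.random() < 0.5 else -1
        if bankroll == 0:
            return True
    return False

trials = 200
busts = sum(ruined() for _ in range(trials))
print(f"went broke in {busts} of {trials} runs")
```

Most runs go broke even with zero house edge; scale the bankroll up or the horizon down and survival improves, which is exactly the absorbing-barrier point.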

------
ianamartin
My god, this guy is so irritating. As usual, he’s not completely wrong, but he
is entirely useless. It’s tautology wrapped in a bunch of garbled language and
name-dropping.

Lesson #1: you don’t know where you fall in a statistical distribution.

No shit. Really? There is a 1% chance of failure in a game. If I play the game
an infinite number of times, I will fail to win 1% of the time.

The alleged twist: if the nature of the game is that a single failure results
in you no longer being able to participate in the game, you cannot play the
game an infinite number of times.

Color my mind blown. I’m absolutely shocked that Murray Gell-Mann was able to
grasp this concept so quickly. A friend of mine has a PhD in Statistics, and I
remember once telling him that the probability of a fair coin tossed an
infinite number of times and landing on heads approaches 50%. He grasped that
concept immediately. I was impressed. We started a company to explain this.

I’m kidding. That didn’t happen because it would be stupid in every possible
way.

Lesson #2: Cost benefit analysis is impossible if you don’t know where you are
in the distribution.

Somehow, not knowing where you are in the range of possible outcomes means
that you can’t know the range of possible outcomes.

The alleged twist: even when you know the range of possible outcomes, you
apparently don’t.

I don’t even know where to start with this. I can’t even make fun of it
properly.

When you’re gambling, the range of possible outcomes is well-defined for most
people. You lose all your money, you break even, or you win. This model is
incomplete because it doesn’t take into account human pathology. Some people
cheat. Some people take out loans from shady people and lose everything and
get a leg broken. Some people commit suicide because they lost a few hundred
dollars.

Guess what, if you include human behavior in the model of Russian roulette,
it’s also a broken model. People will cheat. People will chicken out. People
will want to die and pull the trigger twice. That does not make a cost/benefit
analysis “undefinable,” it just makes the range of possible outcomes larger
than most people are comfortable with. That’s why most people choose to not
play the game.

What Taleb does here (again) is conflate a legitimate criticism of frequentist
statistical models with a critique of incomplete decision models.

But the critique of frequentist models is entirely sophomoric. It boils down
to, “You can’t do something an infinite number of times. Therefore, everything
we know is wrong.”

Fuck off, man. This gets addressed in every basic statistics intro. I'm not
saying there aren't problems with the idea. There are. But it's the same
problem as saying that the limit of 1/y as y approaches infinity is 0. Is
calculus fucked and completely broken because y never gets to infinity?

No. Not even a little.

Lesson #3: social sciences are broken.

Completely agree.

The alleged twist: They are broken because the basic concept of probability is
broken.

The absolute comically stupid apex of all of this linguistic garbage is that
Taleb uses the social sciences to prove how little his argument makes sense.

He can’t tell the difference between a bad model and a broken system for
creating models. He proudly claims that probability is broken because the
models that people create to estimate outcomes are bad. And he uses the worst
possible collection of models for the most complex systems that we know of as
proof that the system is broken.

There are absolutely legitimate criticisms not only of frequentist statistics,
but of inductive logic in general. It’s only a conversation that’s been going
on since, I don’t know, Pythagoras? Maybe before?

There is nothing new or insightful here. Taleb is wrapping up an idea that
everyone already knows in a word cluster so obtuse my fifth-grade English
teacher would have destroyed it, and selling it as some deep new insight.

But hey, he posted it on Medium, so it must be awesome.

Give me a break.

~~~
dsjoerg
I was hoping to find a comment about how terrible the writing was, thank you.
The writing is terrible.

~~~
ianamartin
I’m not completely convinced that Taleb isn’t an elaborate hoax designed to
win the Turing test.

Look, I don’t like being so harsh. But the guy gets really fundamental things
really really wrong. He makes me angrier than Gladwell combined with the
Freakonomics guys. Which is a lot.

There’s a genre of pop-statistics that is genuinely bad for people. And it all
follows the same pattern. Science says x is true, but if you think about it,
surprising y thing is true. Because I’m a special snowflake, and so are you.

It’s absolute garbage.

And I don’t even know that much about stats. I dropped out of a violin
performance and music theory degree. I just read a few stats textbooks and had
some good professional mentors because some of this is relevant to my work as
a data engineer.

This is basic level stuff that’s going wrong. The guy is clueless about
everything he writes. But I’ll give it to him that he’s a genius at marketing.

He publishes books and makes money off of them. I do not. So he wins that way.
But if I ever write a book, it won’t be in the fiction section where his
belongs.

~~~
pizza
> He can’t tell the difference between a bad model and a broken system for
> creating models. He proudly claims that probability is broken because the
> models that people create to estimate outcomes are bad. And he uses the
> worst possible collection of models for the most complex systems that we
> know of as proof that the system is broken.

> And I don’t even know that much about stats. I dropped out of a violin
> performance and music theory degree. I just read a few stats textbooks and
> had some good professional mentors because some of this is relevant to my
> work as a data engineer.

 _eye roll_

> What Taleb does here (again) is conflate a legitimate criticism of
> frequentist statistical models with a critique of incomplete decision
> models.

 _double take_

..bullshit?

His critique:

* E[f(x)] is different from f(E[x])

* Jensen's inequality => use a convex decision/utility function/behavior whenever you are exposed to events out of your control

* Things like coin flip outcomes or binary true/false observations (i.e. not simply just probabilities!) depend on their zeroth moment; no dependency on the magnitude of the moment, such as in outcomes that are made complex by outcome-dependency upon higher-order moments

* Bet on processes whose payoff f(x) has a stochastic first moment, but don't do so stupidly, and bet rarely

* Any uncertainty about the generating process concerning higher-order moments of very small probability outcomes makes the payoff more attractive for this class of complex outcomes
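
A minimal numeric check of the first two bullets (my own toy example, not Taleb's formulation): for a convex f, f(E[X]) can badly understate E[f(X)].

```python
import random

rng = random.Random(42)

# Toy check of Jensen's inequality, E[f(X)] >= f(E[X]) for convex f.
# X is +1 or -1 with equal probability and f(x) = x**2, so the gap
# between the two quantities is as large as it gets.
samples = [rng.choice([-1, 1]) for _ in range(100_000)]

def f(x):
    return x * x

e_f_x = sum(f(x) for x in samples) / len(samples)  # E[f(X)] = 1 exactly
f_e_x = f(sum(samples) / len(samples))             # f(E[X]) ~ f(0) = 0

print(f"E[f(X)] = {e_f_x}, f(E[X]) = {f_e_x:.6f}")
```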

If you care at all, the preprint of Silent Risk will more than satisfy as a
collection of proofs. Hell, even the page I just happened to be looking at
would [0].

[0] [https://i.imgur.com/W1V3jyV.png?1](https://i.imgur.com/W1V3jyV.png?1)

~~~
ianamartin
All of that hinges on your 3rd bullet point.

And translated into plain English, it means that if you bet your life or your
life savings on the outcome of a coin toss, you're an idiot.

That's not an effective critique of frequentist statistical theory. That's
obvious.

The absolute best and most courteous interpretation of this is, "Don't be an
idiot." Thanks, Taleb, for that most insightful piece of advice.

The problem here is that this series of thoughts ascribes an attribute to
statistical theory that is absolutely non-existent: that there is any weight
given to any individual trial of an outcome. No one thinks that. He, and now
you, are arguing against a completely non-existent point.

Look, if you have anything approaching a decent model, you know the range of
possible outcomes.

If you want to understand flipping a fair coin, the outcomes are that there's
a heads and a tails and it has to land on one of them.

What Taleb and you are arguing is that there's a third possibility you didn't
think of where the coin lands in an alternative universe, and Margaret
Thatcher is the queen of the United States, and all the puppies die.

And if that's the real set of possibilities, I want to play that game. Because
I would trade all of the puppies for Queen Thatcher right now. (No offense to
you if you are actually a puppy. I just really don't like Trump.)

Saying that statistics is broken because infinity doesn't exist is like saying
that basic arithmetic is broken if 1 equals 0.

No shit. Yeah, when you change the basic rules of how things work, things get
broken really quickly.

I'm sorry if I'm being obtuse here, but I don't see anything even remotely
worthwhile in his article, your post, or your link.

Trying to work backwards from a stochastic model to the outcome of an
individual event will never work. And no one thinks that it will. If you want
certainty, you have to use an entirely different model of logic.

Taleb is criticizing inductive logic for not being deductive, and then
claiming that you'll get just as good a result by limiting the inductive model
and using only a portion of what makes it work.

It's utter and complete bullshit, as far as I can tell.

Happy to be corrected, if you think I'm wrong.

Can we agree that a charitable plain-English version of what he's trying to
say is this:

Don't play the long odds if you aren't playing the long game. If you're
playing the short game, don't risk more than you can afford to lose.

If he's saying more than that, please enlighten me. In my first post on this
topic I said that he wasn't entirely wrong, but he is entirely useless. If
you've made it past the age of 5 and haven't figured this out already, well,
reading his book won't help you because you won't understand a goddamn word in
it.

I will read his next book because it's unfair to criticize something you
haven't exposed yourself to. But I can't imagine any possible way that it
could comprise a collection of proofs.

Again, what is there to prove? That probability doesn't work good when you
change the rules about how it works? That you can't take a distribution and
apply it to a single point in time? That you can't take a collection of
inductive data and use it to prove a universal truth?

No sane person has ever claimed that you can do any of those things. All I can
tell is that Taleb is claiming to be both interesting and original by writing
a book that says you can't do any of those things.

Welcome to the grown-up world, Taleb. We already knew that.

P.S. Santa Claus isn't real. Maybe I should write a book about how not real
Santa is. I bet Murray Gell-Mann would immediately grasp this concept.

~~~
pizza
[https://imgur.com/a/OAd0W](https://imgur.com/a/OAd0W)

------
soVeryTired
I don't see why Taleb thinks Kelly, Shannon and co. are the only ones who
understand ruin. Daniel Bernoulli formalised what is now known as the Kelly
criterion in the 1700s. Kelly and Shannon's contribution was to demonstrate a
connection to information theory, which was incredibly fashionable in the
first half of the 20th century.

It's really quite an old result.

------
tritium

      Recall from the previous chapter that 
      to do science (and other nice things) 
      requires survival t not the other way 
      around?
    

Uh, what?

I don't understand that sentence? Is " _survival t_ " a typographical error?

~~~
mikeash
I believe the t is meant to be a comma. The author is saying that you can
potentially survive without science, but you can't do science if you're dead.

~~~
pizza
Correct

------
malmsteen
tl;dr?

~~~
zaptheimpaler
Taleb trying to sound smart.

~~~
late2part
It's my opinion that:

1. Taleb is incredibly smart
2. He is not being arrogant or trying to impress (any more than any other writer)
3. He is writing about something that is interesting and not everyone knows

~~~
dragonwriter
On #2, when he misrepresents fields that have routinely addressed the effect
he is discussing for decades as not merely addressing it inadequately but
missing it entirely, he's either being self-aggrandizingly hyperbolic or
stunningly ignorant. Or maybe it's a rhetorical trick to get the reader to
pay attention because they think they are getting a nugget of secret wisdom.
But certainly plenty of other writers don't play that kind of dishonest game.

~~~
feulistia
I've noticed a formula many seem to follow in drawing attention to oneself
as a public intellectual:

1. Find some topic or idea that's interesting and fairly well-accepted but not receiving a lot of attention.
2. Claim ownership over the topic or idea as if the prior literature doesn't exist. Ideally, invent new terminology; abandon existing terms to eliminate a trail to prior authors and ideas.
3. Present the idea or topic in a sensationalistic way.
4. When furor inevitably develops about the lack of acknowledgement of previous writers and researchers and the issues they identified, frame the argument as being about ad hominem critics versus the solidity of your own arguments. Sidestep the actual criticisms.

~~~
ianamartin
This is a much better description of the phenomenon than I could come up with.
I mentioned it elsewhere in the thread, but this is truly better than what I
described.

Well done.

------
TailorJones
Another non-closable pop-up window.

~~~
executesorder66
Just use uBlock Origin.

