I feel like this idea of 'tail risk' is pretty well understood in the gambling community, especially in the poker world of no-limit holdem tournaments.
For example, everyone knows that pocket Aces are the best possible starting hand in poker, and the default strategy is to call any all-in if you have them; your expected value will always be positive, no matter how many other people call or what they have.
However, there is a well-known exception to this rule, and many poker books discuss it - if you are the small stack at the final table, it can make sense to fold your pocket Aces pre-flop if more than one larger stack has already called an all-in.
The reasoning is that even if you win the hand, you aren't certain to move up in finishing position, which is what determines your winnings. On the other hand, if you lose the all-in, you are CERTAIN not to move up. If multiple other people have already called the all-in, there is a good chance that SOMEONE will be knocked out, moving you up in finishing position without your having to take a risk. Therefore, even with a positive expected return, it makes more sense to avoid the 'ruin' situation and fold the pocket Aces.
I have an idea for a poker metric that measures your risk over an entire tournament: each time you face elimination, your chances of surviving are multiplied together. So let's say you finish a tournament having faced an all-in 3 times, with these survival rates:
80% * 50% * 70% = 28%
Then you would have at most a 28% chance of reaching that point again.
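A minimal sketch of how that metric could be computed (the function name and list-of-probabilities input are just illustrative assumptions):

```python
import math

def tournament_survival(survival_probs):
    """Multiply together the survival probability of each all-in faced;
    the product is the a-priori chance of surviving that exact
    sequence of confrontations again."""
    return math.prod(survival_probs)

# The three all-ins from the example above:
print(tournament_survival([0.80, 0.50, 0.70]))  # ~0.28
```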
An interesting idea, but the number of times you face an all-in depends on how you do in the rest of the game - if you win every hand, for example, you will never face an all-in because you will always be the big stack.
Similarly, good bankroll management always accounts for risk of ruin for the stakes and game type. Limit hold'em and pot-limit Omaha have vastly different requirements due to variance, for example. This is also why top players are so frequently staked by third parties: they recognize the need to reduce their risk.
>your expected value will always be positive, no matter how many other people call or what they have.
It depends on how much of an "edge" you expect to have on future play as well, and the risk you expect of becoming unable to fund exploiting that edge in the future. Not every positive-value bet will maximize your expected overall return.
Taleb loses me at the end (Nicomachean ethics? come on), but I love his "take risks, but don't take stupid risks" stance. A stance which I first encountered in How to Legally Own Another Person[1].
The key takeaway is twofold, in my opinion:
- People underestimate tail risk in their day-to-day: Bob smokes and drinks daily, eats red meat, and doesn't exercise at all. These are "small" events independent of each other, but the aggregate will probably lead to ruin (likely a heart attack in his mid-50s).
- People overestimate tail risk (due to a lack of courage[2]): Jill says she's "risk averse" and that she won't start a business because it might ruin her -- although in the grand scheme of things, 6 months of your life and a $10,000 investment are not ruin-inducing.
Exactly. I take fish oil, eat a mostly plant-based diet, don't smoke or drink, and live comfortably in a van for cheap. Why? Survival is easy. All risks are not created equal, and following the herd may mean running off a cliff: debt, consumption-oriented spending, no love/children/friends, workaholism, not enjoying vacations/travel/life, and worrying about the wrong things that don't really matter.
This seems to be a chapter in his book coming out in Feb. 2018. I'll probably buy it because I usually find his books to be worth reading despite his writing style, but I dearly hope he has an editor to fix the final product from whatever this is.
> If someone used a standard cost-benefit analysis [for russian roulette], he would have claimed that one has 83.33% chance of gains, for an “expected” average return per shot of $833,333.
He forgot to consider the "cost" in the cost-benefit analysis. Most people would value their life at significantly more than 6 × $1m, so the expected return is negative.
There are different ways to measure how much people value their lives. Some of the most interesting involve "revealed preferences", i.e. what people actually do, not what they say.
For example, how much higher wages do people demand for riskier work, controlling for other factors? Or, how much extra do they pay for, say, a car with a better safety record?
I just checked the Wikipedia article on the "value of a statistical life", and apparently most current US estimates are around $9 million. This is up from around $7 million a decade ago.
Interesting that you think most people value their life over $6M USD. I bet you that across the entire world, more than 50% of adults would take a 15% chance of death to win $6M USD.
I have that kind of money and I can tell you: there is nothing up here worth losing your life over. I certainly would not risk my life to keep my present lifestyle vs. the one I had in my early twenties when I made and had much less.
When calculating expected cost/benefit, you take the value of the outcome multiplied by the probability of that outcome. So the expected cost to play Taleb's game of Russian Roulette is 1/6 * $6m = $1m, if you value your life at $6m.
Note that's a bit higher than the expected benefit of $0.833m; I kept the numbers round for brevity. The breakeven point would be:
5/6 * C_win = 1/6 * C_lose
C_win = $1m
C_lose = (5/6) / (1/6) * $1m = 5 * $1m = $5m
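If it helps, here's that arithmetic as a quick script (the $6m life valuation is just the hypothetical input from above):

```python
P_WIN, P_LOSE = 5 / 6, 1 / 6
PAYOFF = 1_000_000  # the $1m prize for surviving a shot

def expected_value(value_of_life):
    # Net the probability-weighted prize against the
    # probability-weighted cost of dying.
    return P_WIN * PAYOFF - P_LOSE * value_of_life

print(expected_value(6_000_000))  # ~-166,667: negative at a $6m valuation
print(expected_value(5_000_000))  # ~0: the breakeven point
```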
My point was that Taleb omitted this calculation entirely, which makes it an incomplete cost/benefit analysis.
Of course he forgot about the cost part of the equation. He’s the Deepak Chopra of the Malcolm Gladwell of the Freakonomics guys of the herbal remedy people.
What is most fascinating about Nassim's writings is the night-follows-day certainty with which they will cause a flock of bitter nerds to crawl out of their dimly-lit offices to unwittingly mutter one item or another from the "Nassim Taleb is wrong because he's not one of us" list, which isn't so much to prove him wrong as it is to quickly find one another so that they can fall into each other's arms, delivering reassurances that they are not wrong in opposing him, and that their distinct lack of real-world success stems from the cruelty and wrongness of the outside world, not because their supposedly superior insights are not, in fact, so.
Their all-time favorite, to be repeated with ceremonial regularity, is that his insights are 'trivial'. Ignoring for a moment that these people also believe it 'trivial' to bring a product to market once it has been figured out (by them) in the lab (if they even lower themselves to think in terms of 'products' for the 'populace'), treating his literary writings, which are intended for the general public, which is very much in need of trivial insights, as if they were his scientific writings, then dismissing his scientific writings based on his literary writings, is simply malicious, owed to the fact that he is the wrong kind of academic, one who lifts weights and calls people 'imbecile' on Twitter, and hence reflects poorly on their comic-book-like self-image of scientists as some kind of quasi-master-race which is above cursing and emotion, and whose physicality is a mere inconvenience getting in the way of dreaming up ever more distant alternate realities to be sold to the public as discovery.
You should consider using more punctuation. This entire post has only 3 sentences. It's essentially unreadable. You might have a good point, but I can't understand anything you've said.
Economics and statistics understand perfectly well the distinction between gambling returns that happen in parallel versus gambling returns that happen in series.
For example, in financial economics, suppose you're investing and the return of each stock in each year is an iid random variable R.
If you split your money over 10 stocks for 1 year, you take the arithmetic expectation of R to calculate your expected return. If you're investing in 1 stock over 10 years, you should take the geometric average of R to get your average yearly return.
In other words, if each stock returns 3x with 50% chance and 0x with 50% chance, you should definitely invest if you can split your money over a pool of 100 independent stocks for one time period (you'll have a >99% chance of >100% gross returns!), but definitely not if you were forced to put everything in one single stock for 100 consecutive time periods (you have a >99.999% chance of 0% gross returns).
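A rough simulation of that parallel-versus-serial distinction, under the 3x-or-0x assumptions above (the parameters are just illustrative):

```python
import random

def parallel(n_stocks=100, trials=10_000):
    """Split $1 equally across n independent 3x-or-0x stocks, one period."""
    wins = 0
    for _ in range(trials):
        gross = sum(3 if random.random() < 0.5 else 0
                    for _ in range(n_stocks)) / n_stocks
        wins += gross > 1.0  # did we beat 100% gross returns?
    return wins / trials

def serial(n_periods=100):
    """All-in on one such stock for n consecutive periods: you keep
    your money only by winning every single period."""
    return 0.5 ** n_periods  # probability of NOT being wiped out

print(parallel())  # ~0.999
print(serial())    # ~8e-31: ruin is all but certain
```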
I'm not sure why NNT feels it necessary to say that the smartest Nobel Prize-winning physicists are needed to understand this. I'm pretty sure anyone with a rigorous high school math education can understand it. In point of fact, this distinction was belabored to me in an undergraduate statistics class and a first-year graduate economics class, both many, many years ago.
>You can safely calculate, from your sample, that about 1% of the gamblers will go bust. And if you keep playing and playing, you will be expected have about the same ratio, 1% of gamblers over that time window.
This whole argument is based on the false assumption that all gamblers have the same chance of winning and losing. In reality (and depending on the game), skill results in a massive disparity in outcome among gamblers. This is similarly true for the author's stock market analogy. As the weaker players are flushed out, you should expect the rate of "bust outs" to slow.
Gambling games that require skill are a distinct minority (Blackjack, Poker, and sports betting); Roulette, Craps, War, Keno, Slots, Bingo, etc., are all purely luck-based.
NB: Even in skill-based games, the odds still always (and I mean always) favor the house. Taleb is right. If you play any game for an infinite amount of time at a casino, you will go bust.
Even if the odds favor you on each event, if they are < 100% success and you play long enough, you will eventually lose everything. Gambler's Ruin always wins.
That's not true at all. If you bet 100% of your bankroll each time, then yes. But you're not factoring in that you can calculate what percentage of your bankroll to bet given certain odds.
Given 51% odds on every bet, you will come out ahead, provided you manage your bankroll well.
This also requires some assumptions: That there is no minimum stake and stakes are infinitely divisible. They're important assumptions because finding yourself near zero can easily happen on "full Kelly", especially if it's not a game of perfect information.
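For the curious, a minimal sketch of that kind of bankroll management, assuming even-money payouts and the infinitely divisible stakes the parent flags:

```python
import random

def simulate(bankroll=100.0, p_win=0.51, fraction=0.02, bets=100_000):
    """Bet a fixed fraction of the current bankroll on an even-money
    proposition. With infinitely divisible stakes the bankroll can
    shrink toward zero but never hits it, so there is no ruin barrier."""
    for _ in range(bets):
        stake = bankroll * fraction  # full Kelly here: f* = 2p - 1 = 0.02
        bankroll += stake if random.random() < p_win else -stake
    return bankroll

random.seed(1)
print(simulate())  # log wealth grows ~0.0002 per bet in expectation
```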
This implies that both sides of the transaction will inevitably go bust, bookmaker and gambler alike. It may be true, but it's like saying we're all going to die at some point. The expected times are long enough that it's mostly not worth worrying about.
If the positive-expectation gambler can reduce his stakes, he will find a comfortable expected time-to-ruin, which might be hundreds of years if he were able to quantify it. In this position, he should continue to place his bets.
We make hundreds of small positive-ER decisions like this every day and we don't shy away from them for fear of the black swan.
Just not true. Let's say I make a bet of 1 dollar, I have a 50% chance of winning, and I can make 1000 bets every day. If I start with 100 dollars, I am much more likely to die with a lot of money than broke.
You've defined a fair game [1], one which you're just as likely to win or lose, by the same amount each time. Clearly neither side of this game has an advantage over the other, for any given coin toss. But you expect that long term it will work in your favour?
@dragonwriter is right, because we're talking about infinities. There is some non-zero chance you'll flip a coin heads 10,000 times in a row. In an infinite amount of time, that series of events is guaranteed to happen at some point (Poincaré makes sure of that, although it might take billions or trillions of years).
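The fair-coin game described above is easy to simulate; with a $100 bankroll and $1 bets, ruin typically arrives within a few tens of thousands of flips (the step cap below is just a practical guard):

```python
import random

def time_to_ruin(bankroll=100, max_steps=10_000_000):
    """Fair $1 coin-flip bets until the bankroll hits zero."""
    steps = 0
    while bankroll > 0 and steps < max_steps:
        bankroll += 1 if random.random() < 0.5 else -1
        steps += 1
    return steps if bankroll == 0 else None  # None: survived the cap

random.seed(0)
print(time_to_ruin())  # ruin is certain in the limit; the wait varies wildly
```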
> NB: Even in skill-based games, the odds still always (and I mean always) favor the house.
In Blackjack, card counting can reveal plays where the odds favor the gambler. In sports betting, which can be thought of as a market with high commissions, skilled players consistently make decisions with positive expectation.
True, but it's very hard to get away with card counting at a casino after Ed Thorp popularized it. Taleb knows this too; he wrote the foreword for Dr. Thorp's memoir.
Also true in sports betting, but you aren't gaining an edge on the house so much as on the other bettors. Assuming the house is intelligent, it will make the market well enough to cover its risk.
My favorite example of them failing to do this is when Leicester City were Premier League champions two seasons ago [1].
A good point, in that the house is not necessarily in a net negative-ER position because of the skilled gambler. In totalizator markets the house will always make x% of the pool. In fixed-price markets the house could face a negative position overall, but that is mitigated by bet limits and a quick banhammer. For a gambler who is +ER, though, the house must be negative with respect to that single transaction.
It's a misconception that the house always makes money with respect to an individual customer, given enough time. I don't accuse you of this; I just mean generally. Take fixed markets: overrounds of up to 25% are common, and if you back prospects blindly you'll find your -ER approaches that. But you're free to choose prospects and stakes in the same way Warren Buffett chooses stocks. With sufficient information and a high enough filter, as Taleb calls it, you are free to pick off the bargains.
The information available to market participants is reflected in the price of prospects. Any one gambler may possess information others don't. It won't be perfect information, and most of the time the overround will absorb the difference, leaving the gambler with no option but to place a negative-ER bet or no bet at all. But when the match is over and the waveform collapses and the fans have gone home and all prices are irrevocably shown to have been "wrong" to some degree, you can see that it's really an information game, not one based on luck, and so it's unfair to say all players are eventually doomed.
Sure, the people going bust will slow and you'll end up with higher skilled folks remaining at the end, but the argument still holds. So long as your risk of ruin is non-zero you will eventually go bust.
My god, this guy is so irritating. As usual, he’s not completely wrong, but he is entirely useless. It’s tautology wrapped in a bunch of garbled language and name-dropping.
Lesson #1: you don’t know where you fall in a statistical distribution.
No shit. Really? There is a 1% chance of failure in a game. If I play the game an infinite number of times, I will fail to win 1% of the time.
The alleged twist: if the nature of the game is that a single failure results in you no longer being able to participate in the game, you cannot play the game an infinite number of times.
Color my mind blown. I'm absolutely shocked that Murray Gell-Mann was able to grasp this concept so quickly. A friend of mine has a PhD in statistics, and I remember once telling him that the proportion of heads from a fair coin tossed an infinite number of times approaches 50%. He grasped that concept immediately. I was impressed. We started a company to explain this.
I’m kidding. That didn’t happen because it would be stupid in every possible way.
Lesson #2: Cost benefit analysis is impossible if you don’t know where you are in the distribution.
Somehow, not knowing where you are in the range of possible outcomes means that you can’t know the range of possible outcomes.
The alleged twist: even when you know the range of possible outcomes, you apparently don’t.
I don’t even know where to start with this. I can’t even make fun of it properly.
When you’re gambling, the range of possible outcomes is well-defined for most people. You lose all your money, you break even, or you win. This model is incomplete because it doesn’t take into account human pathology. Some people cheat. Some people take out loans from shady people and lose everything and get a leg broken. Some people commit suicide because they lost a few hundred dollars.
Guess what, if you include human behavior in the model of Russian roulette, it’s also a broken model. People will cheat. People will chicken out. People will want to die and pull the trigger twice. That does not make a cost/benefit analysis “undefinable,” it just makes the range of possible outcomes larger than most people are comfortable with. That’s why most people choose to not play the game.
What Taleb does here (again) is conflate a legitimate criticism of frequentist statistical models with a critique of incomplete decision models.
But the critique of frequentist models is entirely sophomoric. It boils down to, “You can’t do something an infinite number of times. Therefore, everything we know is wrong.”
Fuck off, man. This gets addressed in every basic statistics intro. I'm not saying there aren't problems with the idea. There are. But it's the same problem as saying that the limit of 1 + 1/y as y approaches infinity is 1. Is calculus fucked and completely broken because y never gets to infinity?
No. Not even a little.
Lesson #3: social sciences are broken.
Completely agree.
The alleged twist: They are broken because the basic concept of probability is broken.
The absolute comically stupid apex of all of this linguistic garbage is that Taleb uses the social sciences to prove how little his argument makes sense.
He can’t tell the difference between a bad model and a broken system for creating models. He proudly claims that probability is broken because the models that people create to estimate outcomes are bad. And he uses the worst possible collection of models for the most complex systems that we know of as proof that the system is broken.
There are absolutely legitimate criticisms not only of frequentist statistics, but of inductive logic in general. It's only a conversation that's been going on since, I don't know, Pythagoras? Maybe before?
There is nothing new or insightful here. Taleb is wrapping up an idea that everyone already knows in a word cluster so obtuse my fifth-grade English teacher would have destroyed it, and selling it as some deep new insight.
But hey, he posted it on Medium, so it must be awesome.
I think Taleb likes to think of himself as a scholar so smart that he has destroyed mainstream scholarship, and whose contributions are not being recognized by the lesser scholars. But he's smart enough to recognize that he's just not that deep a thinker and doesn't have anything particularly insightful to say (outside of some financial-modeling topics, perhaps), so he uses all that pedantic prose and name-dropping to cover for it. He's certainly cashing in on it, so it's not like he's going to stop, but I can't shake the feeling that he knows, is bitter, and that it shows in his writing.
I’m not completely convinced that Taleb isn’t an elaborate hoax designed to win the Turing test.
Look, I don’t like being so harsh. But the guy gets really fundamental things really really wrong. He makes me angrier than Gladwell combined with the Freakonomics guys. Which is a lot.
There’s a genre of pop-statistics that is genuinely bad for people. And it all follows the same pattern. Science says x is true, but if you think about it, surprising y thing is true. Because I’m a special snowflake, and so are you.
It’s absolute garbage.
And I don’t even know that much about stats. I dropped out of a violin performance and music theory degree. I just read a few stats textbooks and had some good professional mentors because some of this is relevant to my work as a data engineer.
This is basic level stuff that’s going wrong. The guy is clueless about everything he writes. But I’ll give it to him that he’s a genius at marketing.
He publishes books and makes money off of them. I do not. So he wins that way. But if I ever write a book, it won’t be in the fiction section where his belongs.
> He can’t tell the difference between a bad model and a broken system for creating models. He proudly claims that probability is broken because the models that people create to estimate outcomes are bad. And he uses the worst possible collection of models for the most complex systems that we know of as proof that the system is broken.
> And I don’t even know that much about stats. I dropped out of a violin performance and music theory degree. I just read a few stats textbooks and had some good professional mentors because some of this is relevant to my work as a data engineer.
eye roll
> What Taleb does here (again) is conflate a legitimate criticism of frequentist statistical models with a critique of incomplete decision models.
double take
..bullshit?
His critique:
* E[f(X)] is different from f(E[X])
* Jensen's inequality => use a convex decision/utility function/behavior whenever you are exposed to events out of your control
* Things like coin-flip outcomes or binary true/false observations (i.e. not simply probabilities!) depend only on their zeroth moment; there is no dependency on the magnitude of the moment, unlike outcomes that are made complex by dependency upon higher-order moments
* Bet on processes whose payoff f(x) has a stochastic first moment, but don't do so stupidly, and bet rarely
* Any uncertainty about the generating process concerning higher-order moments of very small probability outcomes makes the payoff more attractive for this class of complex outcomes
If you care at all, the preprint of Silent Risk will more than satisfy as a collection of proofs. Hell, even the page I just happened to be looking at would [0].
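To ground the Jensen's inequality bullet in plain numbers: for a convex payoff f, E[f(X)] exceeds f(E[X]). A few lines of simulation show it (the square payoff and the normal X are arbitrary choices, not anything from Taleb):

```python
import random

def f(x):
    return x ** 2  # an arbitrary convex payoff

random.seed(42)
xs = [random.gauss(0.0, 1.0) for _ in range(1_000_000)]

e_f_x = sum(f(x) for x in xs) / len(xs)  # E[f(X)]: ~1 for a standard normal
f_e_x = f(sum(xs) / len(xs))             # f(E[X]): ~0

print(e_f_x, f_e_x)  # E[f(X)] > f(E[X]), as Jensen's inequality demands
```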
And translated into plain English, that means that if you bet your life or your life savings on the outcome of a coin toss, you're an idiot.
That's not an effective critique of frequentist statistical theory. That's obvious.
The absolute best and most courteous interpretation of this is, "Don't be an idiot." Thanks, Taleb, for that most insightful piece of advice.
The problem here is that this series of thoughts ascribes an attribute to statistical theory that is absolutely non-existent: that there is any weight given to any individual trial of an outcome. No one thinks that. He, and now you, are arguing against a completely non-existent point.
Look, if you have anything approaching a decent model, you know the range of possible outcomes.
If you want to understand flipping a fair coin, the outcomes are that there's a heads and a tails and it has to land on one of them.
What Taleb and you are arguing is that there's a third possibility you didn't think of where the coin lands in an alternative universe, and Margaret Thatcher is the queen of the United States, and all the puppies die.
And if that's the real set of possibilities, I want to play that game. Because I would trade all of the puppies for Queen Thatcher right now. (No offense to you if you are actually a puppy. I just really don't like Trump.)
Saying that statistics is broken because infinity doesn't exist is like saying that basic arithmetic is broken if 1 equals 0.
No shit. Yeah, when you change the basic rules of how things work, things get broken really quickly.
I'm sorry if I'm being obtuse here, but I don't see anything even remotely worthwhile in his article, your post, or your link.
Trying to work backwards from a stochastic model to the outcome of an individual event will never work. And no one thinks that it will. If you want certainty, you have to use an entirely different model of logic.
Taleb is criticizing inductive logic for not being deductive, and then claiming that you'll get just as good a result by limiting the inductive model and using only a portion of what makes it work.
It's utter and complete bullshit, as far as I can tell.
Happy to be corrected, if you think I'm wrong.
Can we agree that a charitable plain english version of what he's trying to say is this:
Don't play the long odds if you aren't playing the long game. If you're playing the short game, don't risk more than you can afford to lose.
If he's saying more than that, please enlighten me. In my first post on this topic I said that he wasn't entirely wrong, but he is entirely useless. If you've made it past the age of 5 and haven't figured this out already, well, reading his book won't help you because you won't understand a goddamn word in it.
I will read his next book because it's unfair to criticize something you haven't exposed yourself to. But I can't imagine any possible way that it could comprise a collection of proofs.
Again, what is there to prove? That probability doesn't work good when you change the rules about how it works? That you can't take a distribution and apply it to a single point in time? That you can't take a collection of inductive data and use it to prove a universal truth?
No sane person has ever claimed that you can do any of those things. All I can tell is that Taleb is claiming to be both interesting and original by writing a book that says you can't do any of those things.
Welcome to the grown-up world, Taleb. We already knew that.
P.S. Santa Claus isn't real.
Maybe I should write a book about how not real Santa is. I bet Murray Gell-Mann would immediately grasp this concept.
I don't see why Taleb thinks Kelly, Shannon, and co. are the only ones who understand ruin. Daniel Bernoulli formalised what is now known as the Kelly criterion in the 1700s. Kelly and Shannon's contribution was to demonstrate a connection to information theory, which was incredibly fashionable in the middle of the 20th century.
I believe the t is meant to be a comma. The author is saying that you can potentially survive without science, but you can't do science if you're dead.
If, at a casino, there is a 1% chance of going bust, then given N people attending the casino approx 1% will go bust, no matter the size of N.
However, if a single person goes to the casino N times, their odds of going bust approach 100% as N gets larger, because as soon as you go bust you can no longer keep playing.
In any kind of iterated game, concern about tail risks is justified when the tail risk means you can no longer play the game.
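A toy simulation of that ensemble-versus-time distinction, assuming a flat 1% bust chance per visit (the numbers are just for illustration):

```python
import random

RUIN_P = 0.01  # 1% chance a single casino visit ends in a bust

def ensemble(n_gamblers=100_000):
    """N different people each visit once: ~1% bust, whatever N is."""
    busts = sum(random.random() < RUIN_P for _ in range(n_gamblers))
    return busts / n_gamblers

def survives(n_visits=1_000):
    """One person visits N times; the first bust ends the sequence."""
    return all(random.random() >= RUIN_P for _ in range(n_visits))

random.seed(7)
print(ensemble())  # ~0.01
print(sum(survives() for _ in range(10_000)) / 10_000)  # ~0.99^1000 ≈ 4e-5
```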
Social science and economic research that attempts to analyze the "rationality" of human behavior has thus far failed to take this logic into consideration, falsely portraying people as irrationally "loss-averse". This oversight in the research community is obvious to successful investors, insurers, etc. because the latter have personal investment in the outcome of these iterated situations that the former simply do not. [This is only my attempt at a summary of his argument, not a defense]
> Social science and economic research that attempts to analyze the "rationality" of human behavior has thus far failed to take this logic into consideration, falsely portraying people as irrationally "loss-averse".
And here’s where he goes off the tracks: even a quick search of Google Scholar will show that it, at best, grossly overstates the case to say that social science (and particularly economics, and even more specifically the subfield of “decision theory” that he calls out) research “thus far” has neglected tail risks in addressing rationality (it's also incorrect to say that there is some kind of consensus result that people vary from rationality consistently in the direction of loss-aversity.)
As a research psychologist, I found his rant particularly upsetting because the very thing he's ranting about has been dealt with in the social sciences for years and years and years.
One of the reasons this issue keeps recurring is because, taken to its logical conclusion, the "time" probability paradigm (to use his term, there are better terms that have been used) as applied to persons' behaviors, only applies to that single individual, which then leads to a sort of paradox. Although we might want to know the probabilities for Theodorus Ibn Warqa, this is in principle unknowable because there is only one Theodorus Ibn Warqa. So when he is ruined, you can in theory say nothing about Maximillian Samuel Warqa.
At some point you have to invoke a counterfactual of sorts, and use a different person as a substitute, and this is why the "ensemble" probability model is invoked.
Taleb is right that the consequences of a ruin event are different at the individual level, but so what? The cost estimate is different from the probability estimate. It's not as if people don't understand this. Maybe in the strawman literature he reads, but not in what anyone else is thinking. What's more typical is a trade-off between a ruin event one way and a ruin event another.
This is difficult stuff if you go beyond the toy gambler scenario he's dealing with. Just for one example:
Loss aversion is a human bias that has fairly obvious evolutionary origins: fear is a stronger behavioral motivator than risk-seeking tendency because it confers a survival advantage (‘you only die once’).
He seems to have a better grasp of probability than most people (me included). Perhaps he's not the greatest at digesting his ideas into an easy-to-grok form, but he seems to be trying his best. You don't have to listen to everything he says, but you also shouldn't dismiss it all because his presentation lacks eloquence. I think this is especially true since he has a background he shares with very few other business-esque advice-givers. When you find a person like that, someone outside the norm, you really ought to listen and learn to read between the lines, or reinterpret what they say into a form you can understand, since finding people at that fringe is hard enough. Requiring that the person also be a perfect communicator is too much to ask, and you're doing yourself a disservice by ignoring them.
1. Taleb is incredibly smart
2. He is not being arrogant or trying to impress (any more than any other writer)
3. He is writing about something that is interesting and not everyone knows
On #2: when he misrepresents fields that have routinely addressed the effect he is discussing for decades as not merely addressing it inadequately but missing it entirely, he's either being self-aggrandizingly hyperbolic or stunningly ignorant. Or maybe it's a rhetorical trick to get the reader to pay attention because they believe they're getting a nugget of secret wisdom.
But certainly plenty of other writers don't play that kind of dishonest game.
I've noticed a formula many seem to follow in drawing attention to themselves as public intellectuals:
1. Find some topic or idea that's interesting and fairly well-accepted but not receiving a lot of attention.
2. Claim ownership over the topic or idea as if the prior literature doesn't exist. Invent new terminology ideally; abandon existing terms to eliminate a trail to prior authors and ideas.
3. Present the idea or topic in a sensationalistic way.
4. When furor inevitably develops about lack of acknowledgement of previous writers and researchers and issues identified by them, frame argument as being about ad hominem critics versus solidity of your own arguments. Sidestep actual criticisms.
This is a much better description of the phenomenon than I could come up with. I mentioned it elsewhere in the thread, but this is truly better than what I described.
The phrase "doesn't blindly accept x" usually has an implication that the subject is aware of x, so you seem to be totally avoiding the argument being made that Taleb seems unaware of work done on the topics he writes about.
You are begging the question. I posit that he is aware of it and is presenting it differently than others.
The disagreement is whether he understands A. the issue and B. what others have said. The whole of academia is full of different perspectives; why not hear from Taleb here?
> You are begging the question. I posit that he is aware of it and is presenting it differently than others.
Flatly stating that an entire field of inquiry has, up to the point at which you are publishing your commentary, failed to recognize and consider the ramifications of a phenomenon - when numerous papers on that effect and its ramifications have been published in the field for several decades - is not being aware of the factual circumstances and presenting them differently than others. It's either ignorance or dishonesty.
And using that mythical failure as a jumping-off point for an explanation (“skin in the game”) of the systematic incompetence that would be needed for the field to have missed the phenomenon only compounds the problem.