IMHO it was one of the most telling tests I could give a potential trader; not so much for quants, though, since this should be considered the quant equivalent of fizzbuzz.
Victor Haghani, of Long Term Capital Management fame (yep, those guys), ran a similar experiment on bet sizing with a loaded coin, and sadly even people who should have known better, math majors among them, didn't fare very well.
"Mr Taleb argues convincingly that the spectacular collapse in 1998 of Long-Term Capital Management was caused by the inability of the hedge fund's managers to see a world that lay outside their flawed models. And yet those models are still widely used today."
"By many’s reckoning, LTCM both defined and emboldened Nassim Taleb. The episode was a microcosm of everything he eventually stood for and against. His resentment against economists grew. He became convinced that they totally do not understand risk. He saw the fragility of the financial system and how dependent the entire ecosystem is on one another. He became disgusted when LTCM fund managers, despite the damage they have done, walked away from the fiasco relatively unscathed and went on to start more funds. He became the biggest advocate of having Skin in the Game."
A good book has been written about LTCM, but it's not Taleb's; it's Lowenstein's _When Genius Failed_.
How many people took that position before it blew up?
> Victor Haghani (born c. 1962) is an Iranian-American financier, one of the founding partners of Long Term Capital Management (LTCM), a hedge fund which collapsed in 1998 and was eventually bailed out by a consortium of leading banks.
I think Taleb is a bit too eager to criticise others, even when his mathematics isn't actually applicable to the situation.
I find this really unfortunate, because Taleb's messages, that system behavior is dominated by tail behavior, and that there are real risks when systems aren't designed to tolerate that behavior, are very interesting and broadly applicable. The medium, though, makes the message less effective.
I often think of Taleb as a sort of counter-example when thinking about how to teach and communicate difficult concepts.
Also, most people are stupid compared to Mr. Taleb, that's just a simple function of his observably high intelligence. That said, I've never seen him treat someone as stupid who doesn't first act stupid, and he's demonstrably willing to engage with people publicly on fair terms.
It's a good test though: do personalities like Taleb and Dijkstra entertain you or do they offend you? Introspecting on why they do or don't is a great opportunity for personal growth.
Much like Mr. Taleb, I just don't see it as a problem. If my personality allows me to appreciate a communication style others have trouble with, without restricting me from appreciating other communications styles that they do not, then I have access to a broader and more diverse set of knowledge and interactions. I find the trade-off acceptable.
true - also a tautology
> justifying the fact that people are more willing to bet with "house money" - this is clearly wrong
I don't think you exactly showed that.
But if I wanted to explain why treating house money differently from your own money was wrong I'd use the example of two people who entered the casino with different amounts of money but now have the same amount (somewhere in between where they each started). Then one is betting with house money and the other isn't. But I claim that it would be rational for them to behave the same way. This is because the consequences of having a certain amount of money will be the same for both of them, no matter where that money came from.
A is losing money they had put aside -- B is losing money he didn't have to begin with.
E.g. A might be playing with borrowed mafia money, or his kids' college fund (and risks losing it), whereas B doesn't risk going anywhere below where he was when he entered the casino. If anything, he has a chance to make his winnings even bigger (or, at worst, to lose them).
In real life (as opposed to thought experiments treating those persons like abstract entities) the origin of that money has a story, so the A=B=100 state is not all that matters in determining the consequences.
You say that as if it's some kind of cheating, instead of enriching the in vitro abstract example with real-world impact and showing why its abstract conclusions don't apply to the real world.
This got me thinking. One thing that is not discussed is that in real-life trading one does not know what percentage advantage one has. Many successful traders rely on subconscious processes for part of their trading strategy, processes they have no way of examining but that seem to work (sometimes). If the only way to find out your expected return (your edge) is to bet and see what happens (maybe this is really the reason one needs "skin in the game"), then ramping up your betting when you are winning is a really good idea. When you are losing money, it is likely your expected return is negative and you should start scaling back your bets until you find an edge again.
It would be interesting to run this code with variable rates of return over time, including negative ones, where one scales the bet size not on your net capital but on (the percentage win on the last n bets)*(capital).
I'm sure this is all in some book written in the 18th century as people have been trading in markets for a long time.
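A toy version of the scaling rule described above can be sketched. Everything here (the window size, the halving factor, the hidden win probability p) is a hypothetical choice for illustration, not anything from the article:

```python
import random

# Hypothetical sketch: size each bet by the win rate observed over the
# last n outcomes times current capital, instead of a known, fixed edge.
def adaptive_run(n=20, bets=500, p=0.55, seed=3):
    rng = random.Random(seed)
    capital, history = 1.0, []
    for _ in range(bets):
        recent = history[-n:]
        # estimated edge: recent win fraction minus recent loss fraction
        edge = (2 * sum(recent) / len(recent) - 1) if recent else 0.0
        stake = max(0.0, edge) * capital * 0.5  # only bet on a positive estimate
        win = rng.random() < p                  # p is the hidden true edge
        capital += stake if win else -stake
        history.append(1 if win else 0)
    return capital
```

With a genuine hidden edge (p > 0.5) the estimated edge tends to stay positive and the stake scales up; when the recent record turns negative the stake drops to zero, which is the "scale back until you find an edge again" behaviour described above.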
Of course, when the edge and odds behind the Kelly Criterion are known and constant, one can make the optimal bets and get rich, but those situations don't exist in real life (unless there is some kind of monopoly or external force making people take the other (losing) side of the bet). The iterative thinking of the Kelly Criterion must be part of a trader's mindset, but markets are never understood well enough for the formula to be strictly applied.
A bit strange to see Taleb talk about a casino situation to explain his thinking. Elsewhere he mocks such a "casino odds" view of the world as very unrealistic and bemoans that such a view will cause one grief if you use those ideas with "skin in the game".
ps. I've only read "Anti-Fragile" and some of his blog essays.
...Unless you engage in strategies designed by traders and rediscovered by every single surviving trader, very similar to what we call, something called the Kelly Criterion, which is to play with the house money. ...
And later restates:
What Taleb said is we should be more aggressive with the house money.
AFAICT, the Kelly Criterion has nothing to do with playing with house money. Instead, Kelly tells you how much of your wallet to wager given the payout and probability of winning. The source of the money you stake is irrelevant. Dice don't care whether your wager came from the house or your 401k.
For example, in a coin toss biased 60% to heads with a 2x payout for win, you should bet 20% of your wallet on heads and keep doing that no matter which way the flips go (2p - 1 = 2*0.6 - 1 = 0.2). If p ≤ 0.5, don't bet.
Playing this strategy optimizes your long-run return and ensures you can never go bust, which would mean missing out on the near-guaranteed payoff over enough bets.
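The even-money case above reduces to a one-liner; a minimal sketch (the 2p - 1 formula applies to even-money payouts only):

```python
# Kelly fraction for an even-money bet that wins with probability p:
# f* = 2p - 1, and never bet without an edge (p <= 0.5).
def kelly_fraction(p: float) -> float:
    return max(0.0, 2 * p - 1)

print(round(kelly_fraction(0.6), 2))  # 0.2 -> stake 20% of the bankroll
print(kelly_fraction(0.5))            # 0.0 -> no edge, don't bet
```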
The problem with the comparison is that if you have one trader who started with $20, and another with $100, but they both currently have $50, they will both, by the Kelly criterion, be betting the same amount of money, even though one trader is trading with "house money" and the other is not. They're still trading without memory, which is the point he was trying to refute, as I understand it.
The idea that one should factor the origin of the money put into a bet is ridiculous. It's your money either way.
> IIRC, the idea of "house money" was invented by earlier researchers who tried to explain why people would bet more after winning.
These guys, I think:
If e.g. your net worth is 10k, you win 500, and suddenly you're twice as willing to bet 500, something is off.
Don't get too tied up in the details of that summary; I'm well aware of just how brutal that summary is. But the details of the summary aren't what's important; what's important is the observation that there are absorbing barriers in philosophy and especially morality. You can't pursue a morality that results in your death, especially if it results in a quick death. There are some others too, but that's the biggest and the most obvious. Civilizational death is another one; more abstract, but also a rich vein of observations.
For example, there are a lot more living people advocating for humanity to be killed off for the planet's sake than dead people advocating for it. It is impossible for the living to observe any other outcome.
This does not lead us via simple, universally-agreeable philosophical steps to a universally-agreed-upon morality. But it does create distinctions between different moralities and philosophies, and once you admit that into the world, it once again becomes valid to study different philosophies and moralities without having to accept the postmodernist "throw your hands in the air, give up ever being able to determine anything, and go study something else instead". It's still nowhere near a mathematically rigorous world by any means, but it's no longer hopeless.
(Bonus observation 1: No, this is not simply utilitarianism. Consider utilitarianism as a spectrum of "how concerned am I about 'utility'". Utilitarianism is the idea that we should be very far to the side of considering that, if not considering it solely; on a spectrum from 0 to 1, utilitarianism is that we should be close to or on 1. This observation is the complementary observation that we can not be close to or on 0. But we might still be on, say, 0.5, which is not "utilitarianism" but may be enough to create philosophies that still keep us alive. This observation also does not depend on "valuing" life over death, because it is on a far more primal and brutal level than that. It is the factual observation that if you hit one of these absorbing barriers you are no longer a participant in philosophy, not a normative statement about whether that is or is not a good thing.)
(Bonus observation 2: This is the beginning of the path, not the end.)
You're arguing against a strawman understanding of postmodern ethics
If you'd like another example of an "absorbing barrier", an HN comment has a limit on how long it is permitted to be. Therefore, all comments that would be longer than that are comments that can not be made. It is probably not unreasonable to state that even a slightly adequate summary of an entire philosophy would not fit within that limit. Therefore, no Hacker News comment can contain a true summary of an entire philosophy. Therefore it is not reasonable to expect a Hacker News comment to contain an accurate summary of an entire philosophy.
Besides, there's another "absorbing barrier" in that I'm not interested in writing that long of a summary anyhow, when they already exist.
Finally, postmodernism's very nature allows it to be a bit of a moving target, where any time someone makes any criticism of it a postmodernist can say with a straight face that that is not what postmodernism is, and no matter what you point at, nope, that's not where postmodernism is. I reject that. What I said may not be the totality of postmodernism, but it is accurate, in that there are definitely postmodernists who operate under the beliefs I described, and the existence of even a branch of postmodernism that believes what I said is sufficient for the point to be an interesting and valid criticism of those beliefs. Personally it is my considered opinion that this is a foundational belief of the entire philosophy and it falls apart entirely if it is destroyed, and there's just varying levels of how obfuscated they manage to make the fact that this is a foundational belief of theirs, but your mileage may vary.
No, it's still irrational, and it doesn't matter whether there is a repetition of bets or just one available. Depending on your total net worth, living conditions, and values, there are good bets and bad bets. A good bet is a bet which maximizes the expected utility of your money. It changes depending on your net worth, of course. It doesn't matter whether you have just won or lost (that is mental accounting); what matters is what your total net worth is at a given time.
>>Actually, what I’m saying is even stronger. I am saying that even if you have the edge, in the presence of the probability of ruin, you will be ruined. Even if you had the edge … If you play long enough.
He wouldn't pass 1st year college math course with that one.
>>Unless you engage in strategies designed by traders and rediscovered by every single surviving trader, very similar to what we call, something called the Kelly Criterion, which is to play with the house money
He doesn't grok what the Kelly Criterion is either. The Kelly Criterion determines the size of the bet which maximizes the utility of money modeled as a logarithmic function. That's it. It doesn't matter whether there is a string of bets available or just one bet. Whether you have just won or lost doesn't matter either (only your total net worth at a given point).
>>And this is called, playing with the market money or playing with the house money.
It's called being confused about basic math and economic concepts and then criticizing people who worked on those.
Typical Taleb: confused, wrong, having no new insights to offer but making up for those with new terms (absorbent barriers, really?) and a lot of words.
> He wouldn't pass 1st year college math course with that one.
Seems pretty straightforward to me. Let's say you play a game with an edge where you win 2x your money 99% of the time. But if you lose once, you lose all your money (1% risk of ruin). If you play this game 70 times, you are more likely than not to end up ruined.
It’s a lot like smoking. One cigarette won’t kill you, but add up the probability cigarette after cigarette, pack after pack, year after year and the probability starts to skyrocket.
For example, imagine a stranger comes up to you on the street and offers you a coin game, where if you win you gain $1 billion, but if you lose you owe him $1 million. The simplified model would claim that you should play the game because the expected value is positive. A more nuanced model would understand that being that much in debt would cause me far more suffering than whatever I'd do with a billion dollars could offset. Even if the coin were weighted more in my favor, and the penalty were less, an even more nuanced model would recognize that I'm probably doing OK enough in my life with the money I have and that anything that chips away at that is an unnecessary risk.
And then a model that even more completely captures the interplay of reasoning that humans do would acknowledge "wait a second, who the hell is this random stranger offering me a devil's bargain?" The researcher with a simple model would say "oh, just ignore that, just take the situation at face value," but that's never how any decision is made. Our priors for "suspicious people offering bargains too good to be true are actually trying to cheat you somehow even if you don't know how" is very high. This is embedded in cultural/religious parables that teach "you do not make deals with clever demons," and because the culture that taught that has survived and persisted over time, it's probably a good prior to hold onto.
All that complexity is lost if you only think in simple terms of expected value and single-iteration games.
No, it wouldn't. Humans are not maximizing EV of money won but utility of that money. This is fundamental to economic models which Taleb is criticizing.
>>All that complexity is lost if you only think in simple terms of expected value and single-iteration games.
It isn't lost unless you are, for some strange reason, maximizing the expected amount of money, which no one sane does (even people who don't understand what expected value is in the first place). Whether the game is single-iteration or multi-iteration doesn't matter either; you just make the best play at every point, and it doesn't change whether or not you will get a chance to make another bet.
And then whenever they do a study they are very careful to fully convince subjects that there is no trickery going on. And then when interviewed the subjects don't say "I suspected trickery". So I think that such studies are justified in concluding that the subjects are acting irrationally.
>Our priors for "suspicious people offering bargains too good to be true are actually trying to cheat you somehow even if you don't know how" is very high. This is embedded in cultural/religious parables that teach "you do not make deals with clever demons," and because the culture that taught that has survived and persisted over time, it's probably a good prior to hold onto.
Right, right. But the question of "why do people make irrational decisions?" is exactly what the researchers are studying. They agree with you! The answer is that decisions which are irrational in one domain would be rational in the domain we were designed for.
No one sane plays a game like that with all their money on the line. This is not because there are future bets available but simply because the expected utility of such a bet is negative. If you play, for example, with constant sizing, your risk of ruin is higher than 0 but lower than 1. It's very often very close to 0.
>>It’s a lot like smoking. One cigarette won’t kill you, but add up the probability cigarette after cigarette, pack after pack, year after year and the probability starts to skyrocket.
But it's not like that. Risk of ruin doesn't skyrocket even if your plan is to make infinitely many bets. That is unless you do something very silly like double the bet amount every time you win.
For more information check "Risk of Ruin" https://en.wikipedia.org/wiki/Risk_of_ruin
Using a cell phone is taking that bet. You are betting that 50 years of cell phone exposure won't cause brain cancer. Nobody knows for sure yet, nobody has had cell phones for 50 years. I'm not saying it will or it won't cause cancer. I don't know. But it has the possibility of risk of ruin.
Second, constant amount betting does have risk of ruin even with an edge. Take a fair coin flip where you start with $500 and get $51 for heads and lose $50 for tails. A clear edge with a constant betting amount. In 100 flips, you have a 27.23% chance of ruin.
For more information consult the concept of "Gambler's ruin" https://en.wikipedia.org/wiki/Gambler%27s_ruin
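The $51/$50 example is easy to check by simulation. A rough Monte Carlo sketch, where "ruin" is taken to mean the bankroll can no longer cover the $50 stake (the exact figure depends on precisely how ruin is defined):

```python
import random

def ruin_rate(trials=50_000, start=500, win=51, lose=50, flips=100, seed=1):
    """Estimate the chance a constant-size bettor can no longer cover
    the stake at some point within `flips` fair coin flips."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        bankroll = start
        for _ in range(flips):
            if bankroll < lose:   # cannot cover the next bet: ruined
                ruined += 1
                break
            bankroll += win if rng.random() < 0.5 else -lose
    return ruined / trials

print(ruin_rate())  # lands in the same ballpark as the figure quoted above
```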
It's essentially the game we played with the housing bubble.
Already mentioned in another comment, see LTCM as a prime example:
If you play 70 times, you have a 1-0.99^70 = 50.5% probability of ruin.
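The arithmetic is a one-line sanity check:

```python
# Cumulative ruin probability over n independent plays,
# each carrying a 1% chance of total ruin:
q, n = 0.01, 70
p_ruin = 1 - (1 - q) ** n
print(round(p_ruin, 4))  # 0.5052
```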
It's good you brought up Markov chains; I can't believe they haven't been mentioned in the article or the comments
Absolutely true, but consider that log utility also creates a decision rule that leads to growth-rate optimization under multiplicative bets.
You may find these notes from Ole Peters interesting https://ergodicityeconomics.files.wordpress.com/2017/03/ergo...
E.g., imagine a bet that costs $1 to play and pays out $2 with probability 60% and $0 with probability 40%. How much would you bet? The expected value of the bet is $1.2 per dollar you bet, so for a single bet, you might wager 100% of your bankroll. But "in the long run" you'll lose all your money doing this. Instead, Kelly would recommend that you bet only 20% of your bankroll. "In the long run", you'll make infinite money doing this. (Not only that, but there's no other strategy that will make you money faster.)
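The "in the long run" claim comes from the expected log-growth per bet, g(f) = p·log(1+f) + (1-p)·log(1-f). A quick sketch with the example's numbers (p = 0.6, even-money payout):

```python
import math

def growth(f, p=0.6):
    """Expected log-growth per bet when staking fraction f of the
    bankroll on an even-money bet that wins with probability p."""
    if f >= 1:
        return float("-inf")  # a single loss wipes you out
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

for f in (0.1, 0.2, 0.3, 0.5, 0.99):
    print(f, round(growth(f), 4))
# growth peaks at the Kelly fraction f = 0.2 and turns negative
# well before f = 1, so over-betting loses money in the long run
```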
That's not true. If you bet 99% of your money each time then there's still no possibility that you go bankrupt (it's literally impossible to go bankrupt unless you bet all your money), and you make money much faster.
Perhaps we could add in a lower bound, like you have to stop betting if you have less than $1. But then it's possible to go bankrupt even if you use the Kelly criterion. Furthermore we've introduced a fixed quantity into the problem, which means there's no longer any justification for saying that your bet should be the same proportion of your wealth every turn.
I've never yet seen a convincing argument for Kelly betting aside from the when utility is logarithmic.
Go back and read through the Math for the Kelly Criterion - when you know your edge and odds, it's the optimal solution. It's basically the balance point between taking advantage of current betting opportunities and preserving capital to take advantage of future betting opportunities.
If you bet 99% of your money on a coin flip, you'll eventually lose a flip and have too little money to take advantage of future coin flips.
Let me try another explanation: your return from a series of coin flips comes from two sources. The first is the return from the next coin flip, which, when you have an edge, makes you want to bet as much as possible on this flip. The second is the return from all future flips, which makes you want to bet less so that a poor result doesn't permanently diminish your ability to make bets. Mathematically, the Kelly Criterion is the point where a marginal change in bet size moves these two values by the same amount, so the change in expected value per unit of bet size is zero, which means it's a maximum.
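That balance point can be found numerically: the marginal growth g'(f) = p/(1+f) - (1-p)/(1-f) is positive below the Kelly fraction and negative above it. A sketch, assuming the 60/40 even-money coin discussed in this thread:

```python
# Bisect for the zero of g'(f) = p/(1+f) - (1-p)/(1-f), which is
# strictly decreasing on (0, 1) for this even-money bet.
p = 0.6
def gprime(f):
    return p / (1 + f) - (1 - p) / (1 - f)

lo, hi = 0.0, 0.99
while hi - lo > 1e-9:
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if gprime(mid) > 0 else (lo, mid)
print(round(lo, 4))  # 0.2, exactly the Kelly fraction 2p - 1
```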
You are comparing Kelly bets to being stupid so of course Kelly wins. Kelly maximizes just one thing - log of bankroll. If your utility is not logarithmic it's not optimal to use Kelly bet sizings and if your utility is logarithmic with some multiplier then you need to adjust Kelly as well (which btw gamblers using Kelly are doing as pure Kelly criterion is universally considered too risky).
>>The first is the return from the next coin flip, which when you have an edge, makes you want to bet as much as possible on this flip. The second is the return from all future flips, which makes you want to bet less so that a poor result doesn't permanently diminish your ability to make bets.
While a poor result diminishes (or kills) your ability to make money from further bets, betting it all and winning increases it. If you want to maximize the EV of your total amount of money, you bet it all at every turn; that is the optimal solution for that objective. If you want to maximize the logarithm of your total amount of money, you bet Kelly. If you want to maximize a more conservative utility, then you bet something else. There is nothing magical about the Kelly criterion other than that.
Of course in the cases where Bob has more money he might have much more money, so this fact isn't very relevant to them unless they have appropriate utility functions.
Another thing that occurs to me is that your utility function is changed by the opportunities you expect to encounter. If your utility function for money would normally be U_0, and you are about to be allowed to make a bunch of bets, then your current utility function, U_-1, is equal to the expectation of U_0 under the probability distribution that results from you betting optimally starting with however much money you have.
Maybe there's a family of utility functions for which if U_0 is in that family then U_-1 is approximately logarithmic? Then that would be a good justification for using the Kelly criterion if you have a long string of bets ahead of you. On the other hand I just checked the HARA (https://en.wikipedia.org/wiki/Hyperbolic_absolute_risk_avers...) family of utility functions, and they're all stable under the process I described. So there are certainly a lot of functions that don't become logarithmic.
The huge mistake you're making is in assuming that money is infinitely divisible. Let's say you start with $1 and you bet 99 cents and lose. Now try betting 99% of 1 cent and see if you don't go bankrupt.
It's very simple to verify Kelly's findings by building a naive simulator.
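For instance, a naive simulator (hypothetical parameters: even-money flips with p = 0.6, a fixed staking fraction) shows the typical outcome peaking near the Kelly fraction:

```python
import random

def median_final(frac, n=200, trials=2001, p=0.6, seed=0):
    """Median final bankroll (starting from 1.0) after n even-money
    flips, always staking the fraction `frac` of current bankroll."""
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        bankroll = 1.0
        for _ in range(n):
            stake = bankroll * frac
            bankroll += stake if rng.random() < p else -stake
        finals.append(bankroll)
    finals.sort()
    return finals[len(finals) // 2]

for f in (0.1, 0.2, 0.4, 0.8):
    print(f, median_final(f))
# the median is highest near the Kelly fraction f = 0.2; over-betting
# makes the typical outcome worse even though the mean keeps growing
```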
> But then it's possible to go bankrupt even if you use the Kelly criterion.
I say theoretical because unless you're in a casino, you don't know the actual odds and edge you are playing with (because they move) which makes your Kelly ideal bet size a guess rather than a fixed number.
Here's more information: https://en.wikipedia.org/wiki/Kelly_criterion
Taleb's merits are more literary than substantial. He's, and I hesitate to say this, somewhat of a charlatan. He targets an audience that considers itself smart, casts outside experts as idiots or malicious hucksters (but not his readers, of course, who, with his help, see through the ruse), uses a few TEDx-style rhetorical moves to present a supposedly revolutionary alternative idea (like using big words, or unexpected multi-disciplinary comparisons), and does it all with supreme confidence.
As long as he's selling books and talks, it's fine and fun, but I would never ask him to look after my laptop when going to the bathroom in Starbucks.
He literally addresses that point in his book "Antifragile." The point he was wanting to make is that the systems he talks about are not merely resilient or resistant to stress or damage -- but rather they are systems that get more robust or strong as a direct result of damage or stress. That's the property that is harder to find a nice concise English word for.