Nassim Taleb, Absorbent Barriers and House Money (medium.com/ml-everything)
149 points by bko on March 7, 2018 | hide | past | favorite | 74 comments



Wow, the work he did on simulating his payout is almost exactly what I used to have people do as part of our interviewing.

IMHO it was one of the most telling tests I could give a potential trader, though not necessarily for quants, as this should be considered the quant equivalent of FizzBuzz.

Victor Haghani, of Long Term Capital Management fame (yep, those guys), did a similar experiment with bet sizing and betting on a loaded coin, and sadly even people who should have known better, math majors, didn't fare very well in his experiment.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2856963


Taleb has a lot of disdain for the LTCM guys:

"Mr Taleb argues convincingly that the spectacular collapse in 1998 of Long-Term Capital Management was caused by the inability of the hedge fund's managers to see a world that lay outside their flawed models. And yet those models are still widely used today."

"By many’s reckoning, LTCM both defined and emboldened Nassim Taleb. The episode was a microcosm of everything he eventually stood for and against. His resentment against economists grew. He became convinced that they totally do not understand risk. He saw the fragility of the financial system and how dependent the entire ecosystem is on one another. He became disgusted when LTCM fund managers, despite the damage they have done, walked away from the fiasco relatively unscathed and went on to start more funds. He became the biggest advocate of having Skin in the Game."


Disdain for LTCM isn't exactly an iconoclastic, or even interesting, position to take. They're among the worst trading catastrophes in history.

A good book has been written about LTCM, but it's not Taleb's; it's Lowenstein's _When Genius Failed_.


> Disdain for LTCM isn't exactly an iconoclastic, or even interesting, position to take.

How many people took that position before it blew up?


Famously, Taleb is supposed to have predicted LTCM. I've read FBR and Swan, but not recently, and took a couple minutes to try to track down the details of this prediction, with little luck. I found numerous references to his having predicted the demise of hedge funds "like" LTCM (my clock will also strike 4:23 at least twice in the next 24 hours). Does anyone have a good cite for this?


I'm not sure he called them out by name. If you read `Against VAR` [1] from 1997 (LTCM blew up in 1998) you see basically all the ideas of The Black Swan / Fooled By Randomness. LTCM was doing all the things that he says are risky. The guy who wrote "In Defense of VAR" on that same page subsequently wrote "Risk Management Lessons from Long-Term Capital Management"[2]. Also in Taleb's 1997 book "Dynamic Hedging", chapter 9 "Vega and the Volatility Surface" talks about liquidity as one of the pre-eminent risks of an option book (as opposed to individual option positions).

[1] http://www.derivativesstrategy.com/magazine/archive/1997/049...

[2] https://merage.uci.edu/~jorion/papers/ltcm.pdf


I looked up the first author on Wikipedia:

> Victor Haghani (born c. 1962[1]) is an Iranian-American financier,[2] one of the founding partners of Long Term Capital Management (LTCM), a hedge fund which collapsed in 1998 and was eventually bailed out by a consortium of leading banks.


What Taleb shows is that people should become more willing to bet as they move away from the "absorbing barrier" of having so little money that they have to quit the game. But he phrases this in terms of justifying the fact that people are more willing to bet with "house money" (money that they have already won) than their own. This is clearly wrong because the starting point is always higher than the absorbing barrier.

I think Taleb is a bit too eager to criticise others, even when his mathematics isn't actually applicable to the situation.


"A bit too eager to criticize others" is an understatement. Taleb has a lot of interesting things to say, and some of them are even reasonably novel. Unfortunately his general "everybody but me is stupid" approach makes his message much less useful and accessible than it otherwise would be. This is a good example of that problem.

I find this really unfortunate, because Taleb's message about systems' behavior being dominated by tail behavior, and about the risks of systems not being designed to tolerate that behavior, is very interesting and broadly applicable. Unfortunately, the medium makes the message less effective.

I often think of Taleb as a sort of counter-example when thinking about how to teach and communicate difficult concepts.


Contrariwise, many people find his communication style refreshing and entertaining with his book sales being a solid exhibit in favor of that.

Also, most people are stupid compared to Mr. Taleb, that's just a simple function of his observably high intelligence. That said, I've never seen him treat someone as stupid who doesn't first act stupid, and he's demonstrably willing to engage with people publicly on fair terms.

It's a good test though: do personalities like Taleb and Dijkstra entertain you or do they offend you? Introspecting on why they do or don't is a great opportunity for personal growth.


I suspect that you may not be aware of this, so I'm commenting (partially with the hope that someone will figure out a nicer, more effective way to say this): You come across as condescending in a similarly offputting way to Taleb himself.


I do take my own advice about introspection and thank you for your concern.

Much like Mr. Taleb, I just don't see it as a problem. If my personality allows me to appreciate a communication style others have trouble with, without restricting me from appreciating other communications styles that they do not, then I have access to a broader and more diverse set of knowledge and interactions. I find the trade-off acceptable.


He's such a thin-skinned snowflake. I enjoyed The Black Swan but didn't know much about him as a person. He should give up twitter, or stop reading reviews. His fans are hilarious too: he's created a whole cult of redundant pedants.


> whole cult of redundant pedants

Aspiring despots


The long tails and the lack of a mean in important distributions are essential concepts. However, we are losing them because of his belligerence. A pity.


> the starting point is always higher than the absorbing barrier.

True - also a tautology.

> justifying the fact that people are more willing to bet with "house money" - this is clearly wrong

I don't think you exactly showed that.


I think I showed why his justification is wrong.

But if I wanted to explain why treating house money differently from your own money was wrong I'd use the example of two people who entered the casino with different amounts of money but now have the same amount (somewhere in between where they each started). Then one is betting with house money and the other isn't. But I claim that it would be rational for them to behave the same way. This is because the consequences of having a certain amount of money will be the same for both of them, no matter where that money came from.


As someone who makes casino games and is an active gambler, I have to disagree with this assessment. The amount of money a person initially brings to the casino is often a function of many variables of their life outside the scope of a casino visit. I've gone to the casino with $500 (my ATM limit) alongside someone who brought $5,000, only to see us both end up with $1,000 in our respective pockets. The different variables that caused us to bring the initial sums definitely influenced our behavior once the values equalized. Income, assets, risk aversion, etc. that all influence the initial sum still come into effect later on when the sums equalize.


So Taleb says the function that controls how you spend your money is a path function, and you say that it is a state function. Does anyone know how this could be tested? Or does anyone have any good Gedankenexperiments to illustrate the difference that they would like to share?


I'm not claiming that they will make the same bets, just that it would be in their interest to do so.


So the sunk cost fallacy is a similar case. Rationally, a budget is a state function, but people tend to think of it as a path function. Thanks for explaining.


Why would it be rational for them to behave the same way?


Because each of their actions have exactly the same consequences.


So A enters the casino with $100 and B enters the casino with $50 -- he then proceeds to double it, whereas A remains steady. So, now both have $100, and B plays with just the house money.

A is losing money they had put aside -- B is losing money he didn't have to begin with.

E.g. A might be playing with borrowed mafia money, or his kids' college fund (and risks losing them), whereas B doesn't risk going anywhere below where he was when he entered the casino. If anything, he has a chance to make his winnings even bigger (or at worst, lose them).

In real life (as opposed to thought experiments treating those persons like abstract entities) the origin of that money has a story, so the A=B=100 state is not all that matters to determine the consequences.


Sure. By altering external factors we can make the agents do whatever we want. Perhaps the fact that B only brought $50 indicates his frugal nature, and suggests that he also has a better financial situation outside the casino.


>Sure. By altering external factors we can make the agents do whatever we want.

You say it as if it's some kind of cheating -- instead of enriching the in vitro abstract example with real world detail, and showing why its abstract conclusions don't apply to the real world.


The interesting thing I got out of this analysis is that, if you have a slight edge of 1%, the constant-percentage betting strategy is much better if you have a large number of bets (100,000). A log scale would be nice, to see approximately where the two strategies' returns diverge.

This got me thinking. One thing that is not discussed is that in real-life trading one does not know what percentage advantage one has. Many successful traders rely on subconscious processes for part of their trading strategy that they have no way of examining, but that seem to work (sometimes). If the only way to find out what your expected return (your edge) is, is to bet and see what happens (maybe this is really the reason one needs "skin in the game"), then ramping up your betting when you are winning is a really good idea. When you are losing money it is likely your expected return is negative and you should start scaling back your bets until you find an edge again.

It would be interesting to run this code with variable rates of return over time, including negative ones, where one scales the bet size not on your net capital but on (the percentage win on the last n bets)*(capital).

I'm sure this is all in some book written in the 18th century as people have been trading in markets for a long time.
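A minimal sketch of the fixed-stake vs. constant-fraction comparison discussed above. This is not the article's actual code; the $1,000 starting capital, $10 stake, and 1% fraction are arbitrary illustrative choices, and the 1% edge is modeled as even-money bets won 50.5% of the time.

```python
import random

def simulate(n_bets, p_win=0.505, start=1000.0, frac=None, stake=10.0, rng=None):
    """Simulate a run of even-money bets won with probability p_win.

    frac=None -> bet a fixed dollar stake each time; stop at the
                 absorbing barrier (too broke to cover the stake).
    frac=0.01 -> bet 1% of current capital each time; the bankroll can
                 shrink indefinitely but never reaches exactly zero.
    """
    rng = rng or random.Random()
    capital = start
    for _ in range(n_bets):
        bet = capital * frac if frac is not None else stake
        if bet <= 0 or bet > capital:
            break  # ruined: hit the absorbing barrier
        capital += bet if rng.random() < p_win else -bet
    return capital

# Same random sequence for both strategies, 1% edge, 100,000 bets.
print("fixed $10 stake:", round(simulate(100_000, rng=random.Random(1)), 2))
print("1% of capital:  ", round(simulate(100_000, frac=0.01, rng=random.Random(1)), 2))
```

With a fixed stake the walk can reach the barrier and stop for good; a constant fraction can shrink the bankroll arbitrarily but never touches zero, which is why the two strategies' long-run returns diverge.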


What a lot of traders do is use 50% of the Kelly limit to cover this risk. At worst by staying under the Kelly limit you reduce your return, while if you exceed it you will blow up at some point.


Don't know about the 18th century, but a somewhat similar strategy is already mentioned by Taleb in the interview: the Kelly Criterion.


I thought what I was saying was quite a bit different. If one is using the Kelly Criterion, one needs to know the probability of winning and the odds paid for a win. Traders generally don't know these at all and can only find them out by making the trade and seeing what happens.

Of course when the Kelly Criterion is known and constant, one can make the optimal bets and get rich, but those situations don't exist in real life (unless there is some kind of monopoly or external force being used to make people take the other (losing) side of the bet). The iterative thinking of the Kelly Criterion must be part of a trader's mindset, but markets are never understood well enough for this formula to be strictly applied.

A bit strange to see Taleb talk about a casino situation to explain his thinking. Elsewhere he mocks such a "casino odds" view of the world as very unrealistic and bemoans that such a view will cause one grief if one uses those ideas with "skin in the game".

ps. I've only read "Anti-Fragile" and some of his blog essays.


The author quotes Taleb:

...Unless you engage in strategies designed by traders and rediscovered by every single surviving trader, very similar to what we call, something called the Kelly Criterion, which is to play with the house money. ...

And later restates:

What Taleb said is we should be more aggressive with the house money.

AFAICT, the Kelly Criterion has nothing to do with playing with house money. Instead, Kelly tells you how much of your wallet to wager given the payout and probability of winning. The source of the money you stake is irrelevant. Dice don't care whether your wager came from the house or your 401k.

For example, in a coin toss biased 60% to heads with a 2x payout for win, you should bet 20% of your wallet on heads and keep doing that no matter which way the flips go (2p - 1 = 2*0.6 - 1 = 0.2). If p ≤ 0.5, don't bet.

https://en.wikipedia.org/wiki/Kelly_criterion

https://www.youtube.com/watch?v=d4yzXbdq2DA

Playing this strategy optimizes your long-run return and ensures you can never go bust, so you never miss out on the near-guaranteed payout over enough bets.
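The 20% figure can be sanity-checked via the expected log-growth per flip. A quick sketch using the 60/40 even-money coin from the example above (the `growth_rate` helper is mine, not from the linked video or article):

```python
import math

def growth_rate(f, p=0.6):
    """Expected log-growth per flip when betting fraction f of the
    bankroll on an even-money coin that lands heads with probability p."""
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

# Kelly fraction for an even-money bet: f* = 2p - 1 = 0.2
print(growth_rate(0.2))   # positive: the bankroll compounds over time
print(growth_rate(0.99))  # negative: over-betting shrinks it in the long run
```

Setting the derivative p/(1+f) - (1-p)/(1-f) to zero gives f* = 2p - 1, so 20% is the unique maximizer; any larger fraction trades long-run growth for short-run variance.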


I believe the point he is making about "house money" and the Kelly Criterion is the size of the bet corresponds to how far ahead you are -- as you get farther ahead, your bets will get larger.

The problem with the comparison is that if you have one trader who started with $20, and another with $100, but they both currently have $50, they will both, by the Kelly criterion, be betting the same amount of money, even though one trader is trading with "house money" and the other is not. They're still trading without memory, which is the point that he was trying to refute, as I understand it.


Is he actually arguing that "house money" is different from "your money" or is he just reusing existing terminology? IIRC, the idea of "house money" was invented by earlier researchers who tried to explain why people would bet more after winning. Obviously "keeping score of your wealth" explains the effect just as well as "playing with house money".


I'm using the definition of house in wide use: the profits you make on gambling or a trade. See:

https://www.quora.com/What-does-Im-playing-with-house-money-...

The idea that one should factor the origin of the money put into a bet is ridiculous. It's your money either way.

> IIRC, the idea of "house money" was invented by earlier researchers who tried to explain why people would bet more after winning.

These guys, I think:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1424076


I see too many of these. Write code to simulate something and obtain an inconclusive version of a result you can simply derive and prove definitively using statistics.


I believe the response to this is that the actual increased willingness to bet is way higher than could possibly be justified by the paltry amounts won.

If e.g. your net worth is 10k, you win 500, and suddenly you're twice as willing to bet 500, something is off.


This idea has a lot of legs if you examine it. One of the more interesting points I've heard from Jordan Peterson in one of his criticisms of postmodernism is this: Postmodernism observes that after centuries of trying, we don't seem to have an objective standard for ethics or philosophy (or at least one that we can all agree on). Postmodernism, to the extent that one can refer to the whole movement in a short sentence, then tends to assume that therefore there are no distinctions between moral frames or perspectives, and from there proceeds to go in a different direction and analyzes everything simply in terms of power, because they decided that's the only thing that matters since moralities and philosophies otherwise have no meaning.

Don't get too tied up in the details of that summary; I'm well aware of just how brutal that summary is. But the details of the summary aren't what's important; what's important is the observation that there are absorbent barriers in philosophy and especially morality. You can't pursue a morality that results in your death, especially if it results in a quick death. There are some others too, but that's the biggest and the most obvious. Civilizational death is another one; more abstract, but also a rich vein of observations.

For example, there are a lot more living people advocating for humanity to be killed off for the planet's sake than dead people advocating the same. It is impossible for the living to observe any other outcome.

This does not lead us via simple, universally-agreeable philosophical steps to a universally-agreed-upon morality. But it does create distinctions between different moralities and philosophies, and once you admit that into the world, it once again becomes valid to study different philosophies and moralities without having to accept the postmodernistic "throw your hands in the air and give up ever being able to determine anything, and then go study something else instead". It's still nowhere near a mathematically rigorous world by any means, but it's no longer hopeless.

(Bonus observation 1: No, this is not simply utilitarianism. Consider utilitarianism as a spectrum of "how concerned am I about 'utility'". Utilitarianism is the idea that we should be very far to the side of considering that, if not considering it solely; on a spectrum from 0 to 1, utilitarianism is that we should be close to or on 1. This observation is the complementary observation that we can not be close to or on 0. But we might still be on, say, 0.5, which is not "utilitarianism" but may be enough to create philosophies that still keep us alive. This observation also does not depend on "valuing" life over death, because it is on a far more primal and brutal level than that. It is the factual observation that if you hit one of these absorbing barriers you are no longer a participant in philosophy, not a normative statement about whether that is or is not a good thing.)

(Bonus observation 2: This is the beginning of the path, not the end.)


> postmodernistic "throw your hands in the air and give up ever being able to determine anything, and then go study something else instead"

You're arguing against a strawman understanding of postmodern ethics


Your criticism would sting more if I hadn't labeled it as such.

If you'd like another example of an "absorbing barrier", an HN comment has a limit on how long it is permitted to be. Therefore, all comments that would be longer than that are comments that can not be made. It is probably not unreasonable to state that even a slightly adequate summary of an entire philosophy would not fit within that limit. Therefore, no Hacker News comment can contain a true summary of an entire philosophy. Therefore it is not reasonable to expect a Hacker News comment to contain an accurate summary of an entire philosophy.

Besides, there's another "absorbing barrier" in that I'm not interested in writing that long of a summary anyhow, when they already exist.

Finally, postmodernism's very nature allows it to be a bit of a moving target, where any time someone makes any criticism of it a postmodernist can say with a straight face that that is not what postmodernism is, and no matter what you point at, nope, that's not where postmodernism is. I reject that. What I said may not be the totality of postmodernism, but it is accurate, in that there are definitely postmodernists who operate under the beliefs I described, and the existence of even a branch of postmodernism that believes what I said is sufficient for the point to be an interesting and valid criticism of those beliefs. Personally it is my considered opinion that this is a foundational belief of the entire philosophy and it falls apart entirely if it is destroyed, and there's just varying levels of how obfuscated they manage to make the fact that this is a foundational belief of theirs, but your mileage may vary.


What postmodernist books have you read?


>>So, if you look at the world as repetition of bets, under condition of survival, then mental accounting is not only not irrational but is necessary. Any other strategy would be effectively irrational.

No, it's still irrational, and it doesn't matter if there is a repetition of bets or just one available. Depending on your total net worth, living conditions and values, there are good bets and bad bets. A good bet is a bet which maximizes the expected utility of your money. It changes depending on your net worth, of course. It doesn't matter if you have just won or lost (this is mental accounting); it matters what your total net worth is at a given time.

>>Actually, what I’m saying is even stronger. I am saying that even if you have the edge, in the presence of the probability of ruin, you will be ruined. Even if you had the edge … If you play long enough.

He wouldn't pass 1st year college math course with that one.

>>Unless you engage in strategies designed by traders and rediscovered by every single surviving trader, very similar to what we call, something called the Kelly Criterion, which is to play with the house money

He doesn't grok what the Kelly Criterion is either. The Kelly Criterion determines the size of the bet which maximizes the utility of money modeled as a logarithmic function. That's it. It doesn't matter if there is a string of bets available or just one bet. Whether you have just won or lost doesn't matter either (only your total net worth at a given point).

>>And this is called, playing with the market money or playing with the house money.

It's called being confused about basic math and economic concepts and then criticizing people who worked on those.

Typical Taleb: confused, wrong, having no new insights to offer but making up for those with new terms (absorbent barriers, really?) and a lot of words.


>> Actually, what I’m saying is even stronger. I am saying that even if you have the edge, in the presence of the probability of ruin, you will be ruined. Even if you had the edge … If you play long enough.

> He wouldn't pass 1st year college math course with that one.

Seems pretty straightforward to me. Let's say you play a game with an edge where you win 2x your money 99% of the time. But if you lose once, you lose all your money (1% risk of ruin). If you play this game 70 times, you are very likely to end up ruined.

It’s a lot like smoking. One cigarette won’t kill you, but add up the probability cigarette after cigarette, pack after pack, year after year and the probability starts to skyrocket.
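The "adds up" intuition can be made exact: with independent plays, the chance of surviving n plays is (1 - p)^n, so the cumulative ruin probability compounds rather than adds. A small sketch, reusing the 1% per-play figure from the example above:

```python
def p_ruin(p_per_play, n_plays):
    """Probability of hitting the ruin outcome at least once across
    n_plays independent plays, each with per-play ruin chance p_per_play."""
    return 1 - (1 - p_per_play) ** n_plays

print(p_ruin(0.01, 70))    # ~0.505: roughly a coin flip after only 70 plays
print(p_ruin(0.01, 1000))  # ~0.99996: near-certain ruin
```

It never quite reaches 1 after finitely many plays, but it tends to 1 as the number of plays grows, which is the "if you play long enough" part of the claim.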


Exactly. I've often felt that a lot of the "wow humans are so irrationally risk-averse when it comes to money" conclusions that some researchers make are suffering from limitations of extremely limited models.

For example, imagine a stranger comes up to you on the street and offers you a coin game, where if you win you gain $1 billion, but if you lose you owe him $1 million. The simplified model would claim that you should play the game because the expected value is positive. A more nuanced model would understand that being that much in debt would be a lot more suffering for me than the offset from whatever I'd do with a billion dollars. Even if the coin was weighted more in my favor, and the penalty was less, an even more nuanced model would recognize that I'm probably doing OK enough in my life with the money I have and that anything to chip away at that is an unnecessary risk.

And then a model that even more completely captures the interplay of reasoning that humans do would acknowledge "wait a second, who the hell is this random stranger offering me a devil's bargain?" The researcher with a simple model would say "oh, just ignore that, just take the situation at face value," but that's never how any decision is made. Our prior for "suspicious people offering bargains too good to be true are actually trying to cheat you somehow even if you don't know how" is very high. This is embedded in cultural/religious parables that teach "you do not make deals with clever demons," and because the culture that taught that has survived and persisted over time, it's probably a good prior to hold onto.

All that complexity is lost if you only think in simple terms of expected value and single-iteration games.


>>The simplified model would claim that you should play the game because the expected value is positive.

No, it wouldn't. Humans are not maximizing EV of money won but utility of that money. This is fundamental to economic models which Taleb is criticizing.

>>All that complexity is lost if you only think in simple terms of expected value and single-iteration games.

It isn't lost unless you are for some strange reason maximizing the expected amount of money, which no one sane does (even people who don't understand what expected value is in the first place). Whether the game is single-iteration or multi-iteration doesn't matter either; you just make the best play at every point, and it doesn't change whether you will get a chance to make another bet or not.


I think you're giving researchers too little credit in your first example. They do at least know that rational agents maximise expected utility, not just expected money.

And then whenever they do a study they are very careful to fully convince subjects that there is no trickery going on. And then when interviewed the subjects don't say "I suspected trickery". So I think that such studies are justified in concluding that the subjects are acting irrationally.

>Our priors for "suspicious people offering bargains too good to be true are actually trying to cheat you somehow even if you don't know how" is very high. This is embedded in cultural/religious parables that teach "you do not make deals with clever demons," and because the culture that taught that has survived and persisted over time, it's probably a good prior to hold onto.

Right, right. But the question of "why do people make irrational decisions?" is exactly what the researchers are studying. They agree with you! The answer is that decisions which are irrational in one domain would be rational in the domain we were designed for.


>>Seems pretty straightforward to me. Let's say you play a game with an edge where you win 2x your money 99% of the time. But if you lose once, you lose all your money (1% risk of ruin). If you play this game 70 times, you are very likely to end up ruined

No one sane plays a game like that with all their money on the line. This is not because there are future bets available but simply because the expected utility of such a bet is negative. If you play, for example, with constant sizing, your risk of ruin is higher than 0 but lower than 1. It's very often very close to 0.

>>It’s a lot like smoking. One cigarette won’t kill you, but add up the probability cigarette after cigarette, pack after pack, year after year and the probability starts to skyrocket.

But it's not like that. Risk of ruin doesn't skyrocket even if your plan is to make infinitely many bets. That is unless you do something very silly like double the bet amount every time you win.


First of all, some people do play games like that. But put that aside for a moment because it’s immaterial. Look at what Taleb is essentially saying: if there is a chance you will be ruined, and you play long enough, you will eventually be ruined. Even if that chance is 1%, probabilities add over time. 1.01^70 ≈ 2. If the chance of ruin is 1% then after 70 plays the chance of ruin is 100%.


What kind of math is that? In your contrived example it should be 1 - .99^70. It's hard not to question your background in probability when you're involving 1.01 and 2.


Nice ad hominem attempt, but even though I accidentally switched a minus for a plus, the point the math teaches is the same: risk of ruin accumulates with the number of bets.

For more information check "Risk of Ruin" https://en.wikipedia.org/wiki/Risk_of_ruin


You are assuming people are betting their whole bankroll on one bet. No one remotely sane does that. If you do something like betting constant amount then risk of ruin is often very close to 0 (and always less than 1 if you have an edge).


You are wrong twice there... sane people make that bet every day by smoking (limited upside gain, risk of ruin loss) and by taking many prescription medicines that have side effects including death.

Using a cell phone is taking that bet. You are betting that 50 years of cell phone exposure won't cause brain cancer. Nobody knows for sure yet, nobody has had cell phones for 50 years. I'm not saying it will or it won't cause cancer. I don't know. But it has the possibility of risk of ruin.

Second, constant amount betting does have risk of ruin even with an edge. Take a fair coin flip where you start with $500 and get $51 for heads and lose $50 for tails. A clear edge with a constant betting amount. In 100 flips, you have a 27.23% chance of ruin.

For more information consult the concept of "Gambler's ruin" https://en.wikipedia.org/wiki/Gambler%27s_ruin
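The 27.23% figure above is the parent's; the same quantity can be estimated with a Monte Carlo sketch (my own, not from the article; the exact number depends on conventions such as whether ruin is checked after the final flip):

```python
import random

def ruin_probability(trials=50_000, start=500, win=51, lose=50, flips=100, seed=0):
    """Monte Carlo estimate of the chance a constant-stake bettor's
    bankroll drops below the $50 stake at some point within `flips`
    fair coin flips (+$51 on heads, -$50 on tails)."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        bankroll = start
        for _ in range(flips):
            if bankroll < lose:  # can't cover the next bet: ruin
                ruined += 1
                break
            bankroll += win if rng.random() < 0.5 else -lose
        else:
            if bankroll < lose:  # went broke on the final flip
                ruined += 1
    return ruined / trials

print(ruin_probability())
```

Even with a positive edge, a constant stake against a finite bankroll leaves a substantial chance of hitting the barrier, which is the gambler's-ruin point being made here.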


How much should you bet to never be ruined but maximize the long term expected value of your bets?



>> No one sane plays a game like that with all their money on the line.

It's essentially the game we played with the housing bubble.


>> No one sane plays a game like that with all their money on the line.

Already mentioned in another comment, see LTCM as a prime example:

https://en.wikipedia.org/wiki/Long-Term_Capital_Management


> If you play this game 70 times, you are very likely to end up ruined.

If you play 70 times, you have a 1 - 0.99^70 ≈ 50.5% probability of ruin.


I totally agree with your summary, but to be fair to Taleb "absorbent barrier" is a real term from the study of Markov Chains. (https://en.wikipedia.org/wiki/Markov_chain#Absorbing_states)


The term is absorbing barrier. Synonymous words, but nobody says absorbent barrier.


The original audio isn't actually good enough to tell which of these he's saying.


Even then, he did twist 'absorbing states' into the basically unused "absorbent barrier."

It's good you brought up Markov chains; I can't believe they haven't been mentioned in the article or the comments.


"Kelly maximizes utility of money modeled as a logarithmic function."

Absolutely true, but consider that log utility also creates a decision rule that leads to growth-rate optimization under multiplicative bets.

You may find these notes from Ole Peters interesting https://ergodicityeconomics.files.wordpress.com/2017/03/ergo...


One interpretation of Kelly is maximizing the e.v. of the log return. Another (equivalent) interpretation is maximizing the expected IRR -- which is where the "in the long run" comment comes in, since "in the long run" you care more about the internal rate of return than the expected value of any individual bet.

E.g., imagine a bet that costs $1 to play and pays out $2 with probability 60% and $0 with probability 40%. How much would you bet? The expected value of the bet is $1.20 per dollar you bet, so for a single bet, you might wager 100% of your bankroll. But "in the long run" you'll lose all your money doing this. Instead, Kelly would recommend that you bet only 20% of your bankroll. "In the long run", you'll make infinite money doing this. (Not only that, but there's no other strategy that will make you money faster.)


>(Not only that, but there's no other strategy that will make you money faster.)

That's not true. If you bet 99% of your money each time then there's still no probability that you go bankrupt (it's literally impossible to go bankrupt unless you bet all your money), and you make money much faster.

Perhaps we could add in a lower bound, like you have to stop betting if you have less than $1. But then it's possible to go bankrupt even if you use the Kelly criterion. Furthermore we've introduced a fixed quantity into the problem, which means there's no longer any justification for saying that your bet should be the same proportion of your wealth every turn.

I've never yet seen a convincing argument for Kelly betting aside from when utility is logarithmic.


>If you bet 99% of your money each time then there's still no probability that you go bankrupt (it's literally impossible to go bankrupt unless you bet all your money), and you make money much faster.

Go back and read through the math behind the Kelly criterion - when you know your edge and odds, it's the optimal solution. It's basically the balance point between taking advantage of current betting opportunities and preserving capital to take advantage of future ones.

If you bet 99% of your money on a coin flip, you'll eventually lose a flip and have too little money to take advantage of future coin flips.

Let me try another explanation: your return from a series of coin flips comes from two sources. The first is the return from the next coin flip, which, when you have an edge, makes you want to bet as much as possible on this flip. The second is the return from all future flips, which makes you want to bet less so that a poor result doesn't permanently diminish your ability to make bets. Mathematically, the Kelly criterion is the point where increasing or decreasing the bet size moves these two values by the same amount, so the marginal change in expected value with respect to bet size is zero, which means it's a maximum.
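That balance point can be checked numerically: for an even-money flip with win probability p, the expected log growth per bet is p*log(1+f) + (1-p)*log(1-f), which peaks at the Kelly fraction f = 2p - 1 (0.2 when p = 0.6). A small sketch:

```python
import math

def expected_log_growth(f, p=0.6):
    # Even-money flip: a win multiplies wealth by (1 + f), a loss by (1 - f).
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

# Scan bet fractions on a grid; the peak sits at 2p - 1 = 0.2.
fractions = [i / 100 for i in range(1, 100)]
best = max(fractions, key=expected_log_growth)
print(best)  # 0.2
```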


>>If you bet 99% of your money on a coin flip, you'll eventually lose a flip and have too little money to take advantage of future coin flips.

You are comparing Kelly bets to being stupid, so of course Kelly wins. Kelly maximizes just one thing: the log of your bankroll. If your utility is not logarithmic, it's not optimal to use Kelly bet sizing, and if your utility is logarithmic with some multiplier, then you need to adjust Kelly as well (which, by the way, gamblers using Kelly do, as the pure Kelly criterion is widely considered too risky).

>>The first is the return from the next coin flip, which when you have an edge, makes you want to bet as much as possible on this flip. The second is the return from all future flips, which makes you want to bet less so that a poor result doesn't permanently diminish your ability to make bets.

While a poor result diminishes (or kills) your ability to make money from further bets, betting it all and winning increases it. If you want to maximize the EV of your total money, you bet everything every turn; that is the optimal solution for that objective. If you want to maximize the logarithm of your total money, you bet Kelly. If you want to maximize a more conservative utility, you bet something else. There is nothing magical about the Kelly criterion beyond that.


I guess one thing you can say about the Kelly criterion without mentioning utility is that if Alice uses the Kelly criterion and Bob uses some other strategy (which is still of the form "bet some fixed proportion of your money each turn") then the probability that Alice has more money than Bob tends to 1 as the number of turns increases.

Of course in the cases where Bob has more money he might have much more money, so this fact isn't very relevant to them unless they have appropriate utility functions.
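That convergence is easy to see in simulation (a sketch; Bob's fixed fraction of 0.5 is an arbitrary over-bet on a 60/40 even-money flip):

```python
import math
import random

random.seed(1)

def log_wealth(fraction, flips):
    """Log of final wealth after a sequence of even-money flips
    (True = win), betting a fixed fraction each time."""
    return sum(math.log(1 + fraction) if win else math.log(1 - fraction)
               for win in flips)

p_win, kelly, bob = 0.6, 0.2, 0.5
alice_ahead = 0
trials = 2000
for _ in range(trials):
    flips = [random.random() < p_win for _ in range(1000)]
    if log_wealth(kelly, flips) > log_wealth(bob, flips):
        alice_ahead += 1
print(alice_ahead / trials)  # approaches 1 as the number of flips grows
```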

Another thing that occurs to me is that your utility function is changed by the opportunities you expect to encounter. If your utility function for money would normally be U_0, and you are about to be allowed to make a bunch of bets, then your current utility function, U_-1, is equal to the expectation of U_0 under the probability distribution that results from you betting optimally starting with however much money you have.

Maybe there's a family of utility functions for which if U_0 is in that family then U_-1 is approximately logarithmic? Then that would be a good justification for using the Kelly criterion if you have a long string of bets ahead of you. On the other hand I just checked the HARA (https://en.wikipedia.org/wiki/Hyperbolic_absolute_risk_avers...) family of utility functions, and they're all stable under the process I described. So there are certainly a lot of functions that don't become logarithmic.


Thanks for the interesting comment and the link. I will be using the first paragraph from your post in the future :)


> That's not true. If you bet 99% of your money each time then there's still no probability that you go bankrupt (it's literally impossible to go bankrupt unless you bet all your money), and you make money much faster.

The huge mistake you're making is in assuming that money is infinitely divisible. Let's say you start with $1 and you bet 99 cents and lose. Now try betting 99% of 1 cent and see if you don't go bankrupt.

It's very simple to verify Kelly's findings by building a naive simulator.
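For example, here is a naive simulator (my own sketch) that tracks wealth in whole cents, which makes the indivisibility point above concrete:

```python
import random

random.seed(2)

def play(fraction, n_bets=200, start_cents=100, p_win=0.6):
    """Even-money flips with wealth held in whole cents; stakes round
    down, and you're ruined once you can't stake a single cent."""
    w = start_cents
    for _ in range(n_bets):
        stake = int(w * fraction)
        if stake < 1:
            return 0  # bankrupt: nothing left to bet
        w += stake if random.random() < p_win else -stake
    return w

# Betting 99% of an indivisible bankroll collapses after a couple of
# losses; the 20% Kelly fraction typically survives and grows.
print(play(0.99))
print(play(0.20))
```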


>The huge mistake you're making is in assuming that money is infinitely divisible. Let's say you start with $1 and you bet 99 cents and lose. Now try betting 99% of 1 cent and see if you don't go bankrupt.

But then it's possible to go bankrupt even if you use the Kelly criterion.


Kelly criterion never claimed to prevent bankruptcy. It only tells you how to bet to achieve the maximum theoretical return possible given a set of odds and payouts.

I say theoretical because unless you're in a casino, you don't know the actual odds and edge you are playing with (because they move) which makes your Kelly ideal bet size a guess rather than a fixed number.

Here's more information: https://en.wikipedia.org/wiki/Kelly_criterion
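For a simple binary bet, the criterion on that page reduces to f* = (b*p - q)/b, where b is the net odds received on a win, p the win probability, and q = 1 - p. A one-liner sketch:

```python
def kelly_fraction(p_win, net_odds):
    """Kelly bet fraction f* = (b*p - q) / b for a binary bet that
    pays `net_odds` per unit staked on a win and loses the stake."""
    q = 1.0 - p_win
    return (net_odds * p_win - q) / net_odds

print(kelly_fraction(0.6, 1.0))  # even-money 60/40 flip: about 20% of bankroll
```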


Using new words for existing concepts shows a lack of interest in delving deep into the existing state of the art. Anti-fragile is another example. Why not call it sturdy, robust, solid? Dude, English's been around long enough to have words for that concept!

Taleb's merits are more literary than substantial. He's - and I hesitate to say this - somewhat of a charlatan. He targets an audience that considers itself smart, casts outside experts as idiots or malicious hucksters (but not his readers, of course, who, with his help, see through the ruse), uses a few TEDx-style rhetorical moves to present a supposedly revolutionary alternative idea (like coining big words, or making unexpected multi-disciplinary comparisons), and does it all with supreme confidence.

As long as he's selling books and talks, it's fine and fun, but I would never ask him to look after my laptop when going to the bathroom in Starbucks.


>Anti-fragile is another example. Why not call it sturdy, robust, solid. Dude, English's been around long enough to have words for that concept!

He literally addresses that point in his book "Antifragile." The point he was wanting to make is that the systems he talks about are not merely resilient or resistant to stress or damage -- but rather they are systems that get more robust or strong as a direct result of damage or stress. That's the property that is harder to find a nice concise English word for.


I did not know that, thank you.


Why are you willing to call him a charlatan or claim he lacks the character to look after your laptop when you haven’t even read the book you are criticizing?



