The ‘Hot Hand’ Debate Gets Flipped on Its Head (wsj.com)
18 points by ssivark on Sept 29, 2015 | 22 comments

Here's the paper's first paragraph, matching what the article said:

"Jack takes a coin from his pocket and decides that he will flip it 4 times in a row, writing down the outcome of each flip on a scrap of paper. After he is done flipping, he will look at the flips that immediately followed an outcome of heads, and compute the relative frequency of heads on those flips. Because the coin is fair, Jack of course expects this empirical probability of heads to be equal to the true probability of flipping a heads: 0.5. Shockingly, Jack is wrong."

But actually Jack is right. Here are all the possibilities. A "streak" means a head was followed by a head, and a "break" means it was followed by a tail.

          Streaks Breaks
  TTHT    0       1
  TTHH    1       0
  THTT    0       1
  THTH    0       1
  THHT    1       1
  THHH    2       0
  HTTT    0       1
  HTTH    0       1
  HTHT    0       2
  HTHH    1       1
  HHTT    1       1
  HHTH    1       1
  HHHT    2       1
  HHHH    3       0
  total   12      12
The paper's later argument is more complex than the first paragraph implies: it groups the sequences by a fixed number of heads. I don't see the point of that grouping, and it doesn't seem to rescue the claim.

They get into some math, but just counting the cases doesn't seem to support their argument at all. If anyone can explain their argument in a simple way, I'm interested.
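Here's a short Python sketch (my own, not code from the paper) that enumerates all 16 sequences and computes the two statistics people keep talking past each other about: the pooled count of streaks vs. breaks across all sequences, and what I understand to be the paper's statistic, the proportion computed within each sequence and then averaged across sequences.

```python
from itertools import product
from fractions import Fraction

streak = break_ = 0   # pooled counts over all sequences
per_seq = []          # one within-sequence proportion per sequence

for seq in product('HT', repeat=4):
    # the flips that immediately follow a heads
    follows = [seq[i + 1] for i in range(3) if seq[i] == 'H']
    if not follows:
        continue      # TTTT and TTTH have nothing to measure
    streak += follows.count('H')
    break_ += follows.count('T')
    per_seq.append(Fraction(follows.count('H'), len(follows)))

print(streak, break_)               # pooled: 12 vs 12, i.e. exactly 50/50
print(sum(per_seq) / len(per_seq))  # per-sequence average: 17/42, about 0.405
```

Both numbers come out of the same 14 sequences: pool every qualifying flip and you get 50%, but average the proportion within each sequence first and you get 17/42 ≈ 0.405, which matches the ~40% figure the article quotes.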

If they're really confident in this, they should go to Vegas, find a high-rolling gambler, and start betting on coin flips. After each heads, offer 55/45 odds that the next coin flip will be heads. I'm sure it won't be hard to find takers.

See my MC in a post below. I think I understand what they are on about. It's biased against repeating in short runs.

This seems similar to the Monty Hall problem, which is also quite unintuitive. Perhaps what makes it work is that the person doing the predicting sees the sequence as a whole, and hence, future events are dependent upon historical ones -- and are no longer purely random.

Consider this question: "You're somewhere in the middle of a 4 coin toss, and the last toss came up heads. What's the probability of the next one coming up heads?" I think the paper is saying it's a 40% chance -- you know you're in a finite series, and you have partial information about that series.

That's what I counted, and got a 50% chance.

Is this a joke or am I the only one who thinks this is a ridiculous article?

Very naive Monte Carlo: https://gist.github.com/arnists/228c4e77b1e2aa6d33f1

It consistently comes back around p; across 50 runs the spread is about 0.25%.

Haven't read the paper yet, but if the PRNG isn't broken, I'd say it invalidates the naive presentation at the start of the article.

EDIT: I think I understand the fallacy the authors present. It holds for short runs: E(H|H) is lower in short runs but asymptotically approaches p as the number of trials rises.
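To see the short-run claim without Monte Carlo noise, here's an exact-enumeration sketch (my own, not the gist) of the per-run average of P(H|H) as run length grows -- assuming, as I read the paper, that the statistic is the within-run proportion averaged over all equally likely runs:

```python
from itertools import product

def mean_phh(n):
    """Average, over all 2^n equally likely H/T runs of length n,
    of the within-run proportion of heads right after a head.
    Runs with no head in the first n-1 flips are excluded."""
    props = []
    for seq in product('HT', repeat=n):
        follows = [seq[i + 1] for i in range(n - 1) if seq[i] == 'H']
        if follows:
            props.append(follows.count('H') / len(follows))
    return sum(props) / len(props)

for n in (4, 6, 8, 10, 12):
    print(n, round(mean_phh(n), 4))   # starts near 0.405 at n=4,
                                      # creeps back toward 0.5 from below
```

At n=4 this reproduces the 17/42 ≈ 0.405 figure exactly, and the value climbs toward 0.5 as n grows, which is the asymptote the gist's graph shows empirically.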

I enumerated the sequences that the article mentioned and counted how often a tail followed a head, and vice versa, and got 12 instances where a head followed a head, and 10 where a tail followed a head. So there is a difference just counting up all the possible 4-flip sequences where at least one of the first three is a head. However, a randomised test -- generating a random length-4 sequence, rejecting it if none of the first three was a head, then doing the same count -- showed no real difference.
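Here's a Python reconstruction of that randomised test (my own sketch; the linked repo's code may differ). The interesting part is that the answer depends on how you tally: pooling every qualifying flip across runs gives ~0.5, while averaging the proportion within each run first gives ~0.405.

```python
import random

random.seed(0)
pooled_h = pooled_n = 0   # tally every qualifying flip across all runs
props = []                # one within-run proportion per accepted run

for _ in range(100_000):
    seq = [random.choice('HT') for _ in range(4)]
    if 'H' not in seq[:3]:
        continue          # rejection step: need a head to condition on
    follows = [seq[i + 1] for i in range(3) if seq[i] == 'H']
    pooled_h += follows.count('H')
    pooled_n += len(follows)
    props.append(follows.count('H') / len(follows))

print(pooled_h / pooled_n)        # ~0.50: pool all qualifying flips
print(sum(props) / len(props))    # ~0.405: average within each run first
```

So "no real difference" is what you get if the randomised test pools flips, which may be why the Monte Carlo and the enumeration arguments keep talking past each other.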

Code here https://github.com/gregryork/Flips/tree/master/src/flips

Take a look at the updated gist. P(H|H) approaches an asymptote at p as the number of trials grows.

EDIT: The graph plots empirical P(H|H) against run length (the output of the last function, on a linear scale).

I'm not really sure what I am looking at there. Is the X axis the number of trials run, or the length of the run?

What do you mean by short runs? If you mean just the total sample of 4, probability H|H is .5.

Actually my MC predicts below 40%, authors state 42%. See the graph in the gist. It plots the last function in the gist.

Ok but how? There are only 14 cases. I enumerated them all and got 50% heads given previous heads. How are you counting?

Or to put it another way, how would you structure a bar bet so you come out ahead?

Bet even money on a coin flip you choose from a set of 4, discarding first toss. Bet against last result every time. This would maximize your gain if the results hold. I'm not convinced they do, but the MC suggests it does.
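Here's a minimal sketch of that bet as I read it -- the patron bets against the previous result on each of flips 2 through 4 (my interpretation; the gist may structure it differently). Note that under this reading each flip is independent of its predecessor, so the pooled win rate should sit at 50% and any trend is noise:

```python
import random

random.seed(1)
net = 0     # +1 per won bet, -1 per lost bet
bets = 0

for _ in range(100_000):
    run = [random.choice('HT') for _ in range(4)]
    for i in range(1, 4):   # discard the first toss, bet on flips 2-4
        bets += 1
        # patron wins when the flip differs from the last result
        net += 1 if run[i] != run[i - 1] else -1

print(net, bets, 0.5 + net / (2 * bets))   # implied win rate, ~0.50
```

Individual batches swing positive or negative, but the implied win rate hugs 0.5 -- which is consistent with the "sometimes lose, sometimes win" fluctuations described below.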

Give me a slight edge over 50% and I would take that bet.

Updated the gist with test code for this bet. With runs of 1000 4-coin runs, you sometimes lose and sometimes win, but the trend is that you win more than you lose. You can play with the numbers to figure out where the long-term odds work against you, but a bar bet designed by me should favor me, so I wouldn't give you better odds than 50%. Fair-coin rule of bar betting, see?

There are large fluctuations in this bet. sum([test_hothand(0.5,3) for i in xrange(10)]) has returned a range of (-348, +312), but biased towards positive for the patron. Summing over 10^6 runs gave a net of +320 in favor of the patron.

sum([sum([test_hothand(0.5,3) for i in xrange(10)]) for x in xrange(10)]) fluctuates too, but its sum is still in favour of the patron.

But a good bar bet should convince the mark he has the advantage. If the paper is correct then one direction has odds of 58%. You should be able to split the difference, offering $0.54 to my $0.46 and still have a 4% edge. Fool that I am, I would think I have the same edge, since I erroneously think the odds are 50/50. Since that makes us both convinced we have an identical edge this is the fairest bet.

52/48 is the max since that's break-even according to the theory (favoring the bartender by 8%). The effect is slight and fickle enough that you can't structure a "win-every-time" bet. For large enough runs you will always have loser runs with arbitrarily bad losses. In the long run patron wins, given enough time.

I still need to see an enumeration before I'll believe it's not 50/50. I would take that bet. Blackjack card counters can make a living on a 2% edge in Vegas.

(Of course if I convincingly lose the bet, I learn a way to make a living on bar bets in Vegas, so I still win in the long run :)

The effect is too slight. No matter how big you make the runs, once in a while you get an unfortunate grouping and you lose. Most of the time you win. Run the gist several times with different parameters.

What I mean is, there are few enough cases so we can put them all on one piece of paper and count. That's what I did in my first comment here. There are exactly as many cases of HT and HH, hence my belief in 50% odds. That's pretty much the standard way to tackle basic probability questions.

But maybe I'm misinterpreting what they mean and there's some way to structure a bet so the odds fall to one side or the other. I just haven't found it yet. If we could find it and put it on paper, we'd have a clear understanding of why it works.

Plus, it'd probably be pretty counterintuitive, and with a decent bankroll it would be reliably profitable over the long term, in certain environments.

Your MC results are interesting, I've looked at your code and it seems straightforward. But counting all possible cases is essentially a mathematical proof, and I just can't find any mistakes in it, or figure out how it could be wrong.

I think this is flawed in the sense that most people (including the article's author) might misinterpret it. The paper assumes that we know when a streak ends. So given that we have had a streak of tails and that it's broken, what's the probability of heads? That will be biased toward heads. I've only skimmed the paper, but maybe they ultimately mean this is the source of the incorrect bias in the layman's intuition?
