
St. Petersburg paradox - gmac
https://en.wikipedia.org/wiki/St._Petersburg_paradox
======
mikeash
Infinity can be difficult to grasp.

Fortunately, there is a huge difference between "infinite" and "really large,"
which is often key. So applying these results to the real world often doesn't
work out.

As the Wikipedia article points out, although the expected value of the game
is infinite if the casino has infinite money, it's not only finite but quite
small if the casino has finite money. Even if the casino were backed by the
entire world GDP, the expected value of the game is only around $50.

This to me is a satisfying resolution to the problem presented.
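A back-of-the-envelope check (my own sketch, not from the article): with a bankroll W the casino caps its payout at W, every uncapped term of the series contributes exactly $1 of expected value, and the capped tail sums to a small constant, so the EV lands near log2(W).

```python
def capped_ev(bankroll):
    """Expected value of the game when the casino can pay at most `bankroll`."""
    ev = 0.0
    k = 1
    while True:
        # Round k pays 2**k with probability 2**-k, capped at the bankroll.
        ev += min(2 ** k, bankroll) * 2 ** -k
        if 2 ** k >= bankroll:
            # Every later round also pays the capped bankroll; those
            # terms form a geometric series summing to bankroll * 2**-k.
            ev += bankroll * 2 ** -k
            break
        k += 1
    return ev

print(capped_ev(250e12))  # world wealth of ~$250 trillion
```

With W around $250 trillion this prints a value just under $49, matching the article's ballpark.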

~~~
femto113
These analyses mostly ignore the obvious fact that the value of infinite money
isn't, in fact, infinite, because eventually you'd run out of things to buy.
Using the aggregate wealth of the entire world (which is probably in the
neighborhood of $250 trillion[1]) as a bankroll yields an expected value of
about $50.

[1] [http://blogs.reuters.com/felix-salmon/2014/04/04/stop-
adding...](http://blogs.reuters.com/felix-salmon/2014/04/04/stop-adding-up-
the-wealth-of-the-poor/)

~~~
mikeash
That's somewhat covered in the "Expected utility theory" section of the
Wikipedia article. In short, if you double your money it doesn't double the
usefulness you get out of it, so you have to take that into account.

However, that analysis assumes that additional money always adds _some_
additional usefulness, in which case the problem reappears. To get rid of it
entirely you have to declare that there is some point beyond which more money
provides zero additional value, no matter how much you add. And as you say,
such a point surely exists.

~~~
cperciva
_To get rid of it entirely you have to declare that there is some point beyond
which more money provides zero additional value, no matter how much you add._

No, you merely have to declare that there is some amount of utility which can
never be obtained no matter how much money you have. (Or as the Simpsons put
it: There's one thing you can't buy: A dinosaur.)

In mathematical terms, if your utility function is f(x) = 1 - 1/x, every
marginal dollar adds utility; but the added utility is never enough to make
the gamble profitable.
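Concretely (treating the payout itself as the argument of the utility function, which is a simplification on my part):

```python
def u(x):
    # Bounded utility: u(x) -> 1 as x -> infinity,
    # yet every marginal dollar still adds something.
    return 1 - 1 / x

# Expected money diverges: each term is 2**-k * 2**k = 1.
# Expected utility converges: each term is 2**-k - 4**-k.
eu = sum(2 ** -k * u(2 ** k) for k in range(1, 60))
print(eu)  # converges to 2/3
```

The certainty equivalent solves 1 - 1/x = 2/3, i.e. x = 3: under this utility, the unbounded gamble is worth exactly $3.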

~~~
AstralStorm
Oh, but you can buy a dinosaur - a dead one. Given enough money you could
potentially clone one too.

Utility function analysis is arbitrary, just as the article says. Compare this
to ergodic theory (used for proper dynamic-system analysis), where you can
clearly derive the value function over the number of plays, including an
infinite number.

You can see what an ergodic process is here:
[https://en.wikipedia.org/wiki/Ergodic_process](https://en.wikipedia.org/wiki/Ergodic_process)
By computing the limit of the time average you will derive the logarithmic
utility function. This works for finite resources as well, but then you can
also derive the time it takes to bankrupt the bank, as well as the probability
of that ever happening.

------
theresistor
I think the problem is with the assumption that expected winnings are a good
guide to behavior at the granularity of individual plays of the game.

The expected win is infinite because very rare scenarios have huge payouts.
However, any one play of the game has a 50% chance of paying out nothing. If I
get to play the game a large enough number of times for the rare scenarios to
actually occur, then I would be willing to pay a higher price than if I only
got one shot at it.
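A seeded simulation (my own sketch) illustrates this: the average payout per play tends to creep upward with the number of plays, roughly like log2 of the number of games, because the rare huge payouts only start appearing in long sessions.

```python
import random

def play(rng):
    # Pot doubles on tails; the first heads pays out.
    payout = 2
    while rng.random() < 0.5:
        payout *= 2
    return payout

rng = random.Random(42)
averages = []
for n in (100, 10_000, 1_000_000):
    averages.append(sum(play(rng) for _ in range(n)) / n)
    print(n, round(averages[-1], 2))
```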

~~~
ikeboy
Do you think that maximizing expected utility in general is not how a rational
person _should_ act? If yes, how do you deal with
[https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenster...](https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem)?

~~~
Zakharov
Expected utility is great, expected money is silly.

~~~
ikeboy
I don't think that's enough to solve this problem. If you replace money by
utility, it seems like it would still be a paradox.

------
mgraczyk
At least for me, the comparison to the expected payouts of the "finite
versions" completely resolves the paradox. People aren't used to reasoning
about infinities, especially when dealing with such tangible quantities as
money. Very large and very small numbers confuse us. It's no wonder that when
faced with astronomically large payouts at infinitesimal probabilities, our
intuition disagrees with "the math". We think about money intuitively in terms
of scales that are meaningful in the real world. We can't imagine playing
against a casino with $10^100, so we would not be willing to pay $330 to play,
which would be the expected payoff for such a casino.

On a related note, we don't have a way of measuring the value of money beyond
quantities we subconsciously label "all the money". Is it better to have
$10^90 or $10^100? In the computation of the expectation, the difference is
crucial. To a human player, there is no distinction.
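For reference, the ~$330 figure comes from the finite-bankroll expected value being roughly log2 of the bankroll, about one dollar of EV per doubling the casino can cover; by the same token, $10^90 and $10^100 differ by only ~$33 of expected value:

```python
import math

# EV against a casino with bankroll W is roughly log2(W)
# (up to a small constant from the capped tail of the series).
for W in (1e90, 1e100):
    print(round(math.log2(W), 1))  # ~299.0 and ~332.2
```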

~~~
a3voices
$10^90 and $10^100 are identical since both would represent 99.9999...% of all
dollars. Spending just a small fraction would flood the market and decrease
the value of USD. Consequently, unless you have a large military at your
disposal, the U.S. government will come after you regardless of the money's
legality since you represent a threat to the monetary system.

~~~
cheepin
Presumably, with that amount of cash you could buy enough political power that
the US military would be on your team

------
danbruc
I don't have infinite money and time to play this forever. And there is a
certain probability that I will lose $10, $100 or $1000 and have to stop
playing because I can no longer afford it, or no longer want to. I wouldn't
mind losing $10 with near certainty, but I would only play if I had a good
chance to win some money before losing $1000 and having to stop. And I would
not risk losing $1000 if the expected gain were small, say $100, or if I had
to play the entire day to get there. Probably not too easy to quantify, but
certainly doable.
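One way to quantify it is a quick Monte Carlo sketch (the bankroll, fee, and target below are illustrative choices of mine): estimate the probability of going broke before, say, doubling your money at a given price per game.

```python
import random

def play(rng):
    # Pot doubles on tails; the first heads pays out.
    payout = 2
    while rng.random() < 0.5:
        payout *= 2
    return payout

def ruin_probability(bankroll, fee, target, trials=10_000, seed=1):
    """Chance of dropping below one game's fee before reaching `target`."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        money = bankroll
        while fee <= money < target:
            money += play(rng) - fee
        if money < fee:
            ruined += 1
    return ruined / trials

print(ruin_probability(bankroll=1000, fee=10, target=2000))
```

At $10 per game most trials go broke long before doubling, even though the game's expected value is nominally infinite.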

~~~
pix64
If I pay 2 dollars to play, I have a 50% chance each game of at least breaking
even and being able to continue playing.

If I pay 4 dollars to play, I have a 25% chance each game of at least breaking
even and being able to continue playing.

~~~
TazeTSchnitzel
The minimum payout is $2. You have 100% chance each game of at least breaking
even if you pay $2 to play.

------
ajmurmann
In a way this is like waiting for a good black swan event to happen. It is
going to happen and it will be great, but do you have enough money to play
long enough till it happens? I think we don't play this kind of game for the
same reason we are bad preparing for black swan events. If I spent huge
amounts of money on preparing for a black swan disaster that might happen
chances are still high that I will only get theoretical benefits from the
investment and nothing ever happens during my lifetime. Similarly here, I
statistically win, but in practice I have a high chance of running either out
of money or out of time, because I don't have infinite of those, even if the
imaginary casino had infinite money.

~~~
pixl97
> It is going to happen and it will be great, but do you have enough money to
> play long enough till it happens?

I think this can also be said as "The market can remain irrational longer than
you can remain solvent".

------
justinsaccount
I wrote up a quick version of this in Python (assuming I read it right).

Rather than try to keep track of profit and loss, it just plays for 1000
rounds and figures out the max cost per game that would have broken even.
There are obviously different ways of doing it, like assigning actual values
to starting money and cost per game and then playing until a 20% profit or
bankruptcy.

From what I see, for 1000 rounds, paying $6 would mean a profit 100% of the
time. Also, the average breakeven should probably be taken at the 90th
percentile instead.

    
    
      #!/usr/bin/env python3
      import random
      
      def play():
          # Pot starts at $2 and doubles on each "tails";
          # the first "heads" ends the round and pays out the pot.
          payout = 2
          while True:
              if random.choice([0, 1]):
                  return payout
              payout *= 2
      
      def simulate(rounds=1000):
          # Play a fixed number of rounds and report the per-game
          # cost that would have exactly broken even.
          profit = 0
          max_win = 0
          for i in range(1, rounds + 1):
              win = play()
              profit += win
              max_win = max(win, max_win)
          print("Rounds: {}, max win: {}, profit: {}, breakeven cost: {:0.2f}".format(
              i, max_win, profit, profit / i))
          return profit / i
      
      def avg(l):
          return sum(l) / len(l)
      
      def main():
          breakevens = [simulate() for _ in range(1000)]
          print("Lowest breakeven cost: {:0.2f}".format(min(breakevens)))
          print("Average breakeven cost: {:0.2f}".format(avg(breakevens)))
          print("Highest breakeven cost: {:0.2f}".format(max(breakevens)))
      
      if __name__ == "__main__":
          main()

------
Houshalter
The classic solution is that $s are not equal to utility. People have some
discount rate. But you can trivially rephrase the problem so that the $
amounts increase at the same rate as your discount rate, and you get the same
problem.

Only a bounded utility function is a solution - there must be some amount of
money where literally even a trillion dollars more doesn't matter.

That seems acceptable, but it still means this game is worth some amount to
play, and that amount can still grow very large before reaching your bound.
Also there must be some things which we can't bound.

There is a very related problem called Pascal's Mugging:
[http://wiki.lesswrong.com/wiki/Pascal's_mugging](http://wiki.lesswrong.com/wiki/Pascal's_mugging)

In Pascal's mugging, a mugger asks you to pay him $5, or he will kill 3↑↑↑3
people (an incomprehensibly huge number that, for all intents and purposes,
might as well be infinity). He says that he is a Matrix Lord and likes
playing games with simulated people.

This is, of course, incredibly unlikely. But is the probability he is telling
the truth greater than 1/3↑↑↑3? Is $5 worth more than a human life? If so you
should pay him.

This is a general problem with expected utility. EU only cares about the
_average_ utility. The utility of all the possible outcomes, weighted by their
probability. A single outlier can throw the average case off a lot.

EU is forced to trade away utility from the majority of probable outcomes to
really weird unlikely outcomes, like the mugger, or winning an infinite series
of coin flips. EU is optimal in most everyday problems, but it can fail in
extreme cases.

~~~
arielby
The problem with the original lottery is that most of its value is from high-
EV tiny-probability events, e.g. the 2^{-50} probability of winning 2^50
dollars. The practical result of that event does not seem to be worth 2^50
utilons, to say the least. It is hard to think about events worth that much.

However, many perturbations of this lottery can actually be good bets.

For example, suppose you gain 3^n dollars with probability 2^{-n}. Then you
have a 1/128 chance of winning $2187, a 1/256 chance of winning $6561, and
this game starts looking much nicer.

The "Pascal's Mugging" divergence is a different problem, where Solomonoff-
style priors imply negative-exponential probabilities of Busy-Beaverish
payoffs. Ordinary priors don't really have this problem.

~~~
Houshalter
It seems like the same class of problems, because they are both about high
payoff, low probability bets. Solomonoff induction is just a formalization
used to show the result is very general.

Any reasonable prior should have similar cases. Unless you really believe the
mugger being a Matrix Lord has _0_ probability, or that God has _0_
probability, etc., you are forced to act as if they might be true. Which
results in wasted effort in the vast majority of possible outcomes, in
exchange for a massive payoff in incredibly rare outcomes.

Assigning 0 probability is not something you should do lightly. It would mean
you could wake up and find yourself outside of the matrix, and you still would
not believe it had any chance of being true. It would mean God himself could
come to you and say "yeah, it's all real," and you would be forced to believe
there is still _0_ probability he exists.

------
andrewla
Assigning a finite value is a classic "Black Swan" fat-tail misapprehension.
How much you would pay to play it is only one side of the equation; the other
is how much you would charge to allow someone else to play it.

Our inability to reason about infrequent events means that a casino that plays
this game may look like a very attractive proposition, because in practice
(finite small-scale simulation) the expected payouts are quite reasonable. So
it would behoove the casino to leverage itself up to its eyeballs to maximize
the return on investment.

While the numbers for the "finite versions" part of the article seem quite
reasonable, it's easy to forget that when leverage comes into play, a game
like this can not only bankrupt the casino, but can ripple back to all of the
investors (lenders) as a loss that far exceeds the profits in the history of
the casino.

~~~
AstralStorm
Even better: any casino would quickly be bankrupted by offering this game
repeatedly, even without leverage. The small payoffs quickly add up.

This is why all games of chance have a "bank wins" feature.

------
arrel
I believe the equation used on the page is wrong. It states projected winnings
as 1/2 x $2 + 1/4 x $4 + ... But you only have a 1/4 chance of winning $2,
since half the time that you get heads on the first flip, you also get heads
on the second flip, meaning you'll win more than $2.

The equation should be: 1/4 x $2 + 1/8 x $4 + ...

Still goes to infinity, but at half the pace.

~~~
rabbidruster
I think if you get tails on the first flip you still get the $2 payout.

~~~
arrel
You're right! Turns out Wikipedia doesn't need me after all.

------
caf
This is essentially the flip side to the more banal paradox of the casino
itself: the expected value of casino games is strictly negative for the
player, but people choose to play them anyway. In the case of the St
Petersburg game, the variance means that you're far more likely to lose money
than win it in a single game, despite the highly positive EV; in the case of
the casino, the variance means that you can often walk out ahead after playing
roulette for an hour or two despite the slightly negative EV.

------
thetruthseeker1
Also, there is another point I want to make: a lot of readers assume that they
can only play it once. I don't think that is a constraint at all. If it is a
constraint, the problem is simplified: if you play only once, you have less
than a 1/(amount) chance of seeing your investment back or more.

If you invested $1024 (10 successive head rolls), you have less than a 1/1024
chance of seeing it back.

Of course, you start from $2 and not $0 (I have simplified a few things to
drive the point).

~~~
dagss
There is nothing in the game about $1024/being 10 rolls. You pay X to enter
the game at the start, then you flip up to infinitely many times, and the
paradox is what X should be.

------
queryly
It is tricky. The expected value is a mathematical term people invented for
easy calculation. It is different from "value" in human perception.

How much is a $1 lottery ticket worth? It is probably 40 cents, depending on
the probabilities. It's worth zero for all the people who lose and millions of
dollars for the lucky one. There is no in-between value, which is what the
expected value represents.

------
xrange
How long does each coin flip take, and how soon can I repeat the game? Does
each flip take an equal and finite amount of time? Do subsequent flips take
half as much time? Can I play Graham's number of games in an hour? A countably
infinite number of games in an hour? If we're wondering about "most people",
they might be subconsciously taking the time factor into account.

------
tlb
Log utility is a fairly extreme way of discounting huge rewards. A less
extreme way is to consider that there is only so much money or wealth in the
world, on the order of $100T. Huge payouts would require printing more money,
reducing the relative value of all money. So a linear utility function for a
finite world looks like x / (1 + x/$100T).
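Plugging that into the game (applying the utility to the payout alone, a simplification on my part) gives a finite expected utility close to the ~$50 finite-bankroll figure:

```python
W = 100e12  # rough total wealth in the world

def u(x):
    # Linear for x << W, saturating as payouts approach all the money there is.
    return x / (1 + x / W)

# Each term is 2**-k * u(2**k) = 1 / (1 + 2**k / W): roughly 1 until
# 2**k nears W, then decaying geometrically, so the series converges.
eu = sum(2 ** -k * u(2 ** k) for k in range(1, 200))
print(round(eu, 1))
```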

------
aidenn0
Another interesting risk analysis problem is where you have a million boxes.
All but one box has a fixed sum of money X under it, and the remaining box has
a bomb that will kill you if you open it.

How high would X have to be for you to be willing to play the game?

------
swehner
So the conflict of this paradox is that most people will not play this game,
even though it actually is a worthwhile game to play - if someone were to
offer to let you play it.

That's not the usual meaning of the word paradox.

------
jcr
The original 1738 paper by Bernoulli:

[https://news.ycombinator.com/item?id=9902047](https://news.ycombinator.com/item?id=9902047)

------
throwaway1967
This Platonic idea that just because something works in theory, reality must
work the same way, is an old misunderstanding.

But before I criticize these "attempts" at solving this problem by these deft
mathematicians, I'd like to see a single video example of 10 coin tosses all
coming up either heads or tails.

After we can all see that this can happen, then we can start worrying about
how much money a casino in the real world would charge for such a game being
played in the real world, by real players, with real coins.

It is amazing that out of all the mathematicians listed on Wikipedia who
attempted this, only one considered actually sampling (supposedly simulated
coin tosses).

This can be easily simulated, but they'd rather stay within the comfy confines
of calculation, so that presumably they can publish more papers.

~~~
Someone
[http://www.youtube.com/watch?v=rwvIGNXY21Y](http://www.youtube.com/watch?v=rwvIGNXY21Y)

Not that hard to do. Just a matter of perseverance and a bit of luck. Yes, it
could be faked, but I trust him.

~~~
Spellman
Similarly, here's another fellow rolling a Yahtzee.
[https://www.youtube.com/watch?v=fiTwar7mFws](https://www.youtube.com/watch?v=fiTwar7mFws)

