
Two Envelopes Problem - atomon
http://en.wikipedia.org/wiki/Two_envelopes_problem
======
yanowitz
Isn't the expected value of both envelopes, prior to picking either, 0.5 * X,
where X is the sum of the envelopes' value?

So one way to argue this is a fallacious problem is that you can't take an
unknown (the actual value of the envelope that you picked) and pretend it's a
known. The value of the envelope you are holding, prior to opening it, is 0.5
* X. The value of the other envelope is also (0.5 * (2/3 * X)) + (0.5 * (1/3 *
X)) == 0.5 * X.

This may be too ill-expressed -- but there's something wrong with treating the
value of an unopened envelope as anything other than completely probabilistic.

Put differently, don't you have to say

0.5 chance I picked the bigger envelope (call that A). In which case, the
other envelope is 1/2A. So if I switch, I lose 1/2A.

0.5 chance I picked the smaller envelope (A/2). So if I switch, I gain 1/2A.

Which means if I switch, half the time, I gain 1/2A and half the time I lose
1/2A, for an expected gain of 0 from switching.
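
This zero-expected-gain argument is easy to check numerically. A minimal Monte Carlo sketch (the concrete amounts 100 and 200 are arbitrary stand-ins for the two envelope values):

```python
import random

# Monte Carlo sketch of the argument above: half the time switching
# gains 1/2 A, half the time it loses 1/2 A, so the average gain is 0.
def switch_gain(rng):
    envelopes = [100, 200]       # arbitrary stand-ins for X and 2X
    rng.shuffle(envelopes)
    kept, other = envelopes
    return other - kept          # what switching would change

rng = random.Random(0)
n = 100_000
avg = sum(switch_gain(rng) for _ in range(n)) / n
print(avg)                       # close to 0
```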

~~~
sunir
That is the right answer. The flaw is that the "A"s in the formula are
different actual values depending on which envelope you have chosen, but the
analysis has them as the same.

In other words the paradoxical analysis is done relatively, which is wrong.
You have to analyze probability from an absolute frame of reference.

~~~
miloshh
Well, of course you're right, as is yanowitz. But there is something
unsatisfying about this explanation. Intuitively it is obvious, but why is the
relative reasoning incorrect?

There is no formal distinction between absolute and relative quantities, and
no theorem that says that expected values can only be taken from absolute
quantities. There are just random variables, and these have some
distributions, and they can be independent or not. Nothing prevents you from
taking an expectation of a random variable that is a ratio of two other random
variables.

Another angle - I could say that our definition of expected value, based on
weighted arithmetic average, is completely arbitrary, and instead define my
own expected value G[X], based on the geometric average. Suddenly, the
relative approach becomes correct: sqrt(2 * 0.5) = 1, so the expected relative
improvement from switching is 1. What the hell is going on?
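
The two averages can be checked in a couple of lines (a sketch; the ratios 2 and 0.5 are the two equally likely outcomes of switching):

```python
import math

# The arithmetic expectation of the ratio (other / yours) is 1.25,
# but the geometric mean of the same two equally likely ratios is 1.
ratios = [2.0, 0.5]                       # other envelope / your envelope
arith = sum(ratios) / 2                   # 1.25
geom = math.exp(sum(math.log(r) for r in ratios) / 2)   # sqrt(2 * 0.5)
print(arith, geom)
```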

~~~
scotty79
> Intuitively it is obvious, but why is the relative reasoning incorrect?

Because in the reasoning the symbol A is used to denote "the expected value of
a random variable representing the amount in the envelope you picked", but
later on "the expected value of a random variable representing the amount in
the envelope you picked provided that you picked the envelope with more money"
and "the expected value of a random variable representing the amount in the
envelope you picked provided that you picked the envelope with less money".

The error comes from splitting the reasoning into two cases and failing to
factor in the condition on which you split in your further calculation of the
cases.

If you solve some equation and have to split your reasoning into two (or more)
cases, then while reasoning through those cases you have to remember the
condition you assumed for each case and factor it in (possibly tossing away
some solutions).

I'd like to see some day a less one-dimensional way of writing down
mathematical reasoning, so one could see how information flows through the
course of a proof and errors like this would show up more easily.

~~~
miloshh
OK, now I see there are two slightly different possible formalizations of the
problem:

1. You are told that the two envelopes contain amounts A and 2A, but you
aren't told what A is. After you pick one envelope, you _are_ allowed to open
it, and then you're given the choice to switch. Here the optimal move depends
on the distribution of A, and if you don't know it, you can't do much other
than pick randomly. After some googling, this is the more common
formalization, and it is analyzed in several math papers and blogs.

2. (The version I was assuming.) You are told that the envelopes have, say,
$100 and $200. You pick one and you _aren't_ allowed to open it yet. Now
you're given the option to switch one last time. There is no problem with
undefined priors and weird conditional probabilities in this version. However,
the freaking paradox still holds! The expected value you get by switching _is_
$150, no question about that. But the expected relative gain you get by
switching _is_ 1.25, there's also no question about that! This is the real
paradox to me. Taking an expectation of a relative quantity is intuitively
wrong, but why exactly?
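
Both claims in this paragraph can be verified with a few lines of Python (a sketch of the $100/$200 version; X is the amount you hold, Y the amount in the other envelope):

```python
# The $100/$200 version: X is the amount in your envelope, Y the amount
# in the other one. Each of the two assignments has probability 1/2.
cases = [(100, 200), (200, 100)]               # (X, Y), each with prob 1/2

E_X  = sum(x for x, y in cases) / 2            # 150.0
E_Y  = sum(y for x, y in cases) / 2            # 150.0
E_YX = sum(y / x for x, y in cases) / 2        # (2 + 0.5) / 2 = 1.25

print(E_X, E_Y, E_YX)
```

Both expectations are legitimate: E[Y] = E[X] says switching is worth nothing in dollars, while E[Y/X] = 1.25 implies no dollar gain, because the expectation of a ratio is not the ratio of expectations when the variables are dependent.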

~~~
scotty79
Ad.1 If in the reasoning A is a fixed number, then you can't assume that you
have a 1/2 probability that the other envelope contains more money. The
probability depends on the distribution, and if you don't know the
distribution it cannot be calculated. The fact that it cannot be calculated
does not entitle you to assume it's 1/2.

If you have a problem with that, imagine a stack of cards. You and your
opponent each pick one card. Whoever picks the higher card wins. Until you see
your card you have a 1/2 probability of winning. But after you see you just
picked a 3, your probability of winning changes sharply (depending on what
cards are left in the stack).

Ad.2 When you don't open your envelope you must treat the amount in it (A) as
a random variable. Then everything I wrote in the post above applies.

It doesn't matter that you did not look into the envelope. Because whether you
got the higher or lower amount is significant and you don't know which
occurred, you must split your reasoning and consider both cases. But upon
splitting you must remember that the condition you assume for each given
branch of your reasoning must be taken into account. You can do this by
conditioning your random variable A on the appropriate event, and that
influences its expected value. So by all means you can sum up expected values,
but not before appropriately adjusting them with the conditions you assumed
for the branches of your reasoning.

Consider solving the equation x^2 + yx + 1 = 0. While solving it you may split
your reasoning into three cases depending on whether y^2 - 4 is larger than,
smaller than, or equal to zero (even though you don't know the value of y).
But when summing up the solutions you must remember that you assumed something
about y in each branch, so for the x you've found, y can no longer be any real
number.

~~~
miloshh
Ad 1 - Not sure what you're trying to say, all I said is that you don't know
the distribution of A - if you knew it, you could make an informed decision on
whether to switch.

Ad 2 - If you carry out the analysis you're trying to teach me, you'll find
that the expected relative gain from switching is 1.25. Now explain that.
(You'll also find that the expected absolute gain is 0 - that's not a
paradox.)

~~~
scotty79
Let's denote the expected value of the money in the envelope I picked by A.

If I want to switch I have to consider two cases. Let's label them e1 and e2:
e1 means I picked the envelope with the smaller amount of money, e2 means I
picked the envelope with the larger amount. The probability of each case is
1/2.

What is the expected value of the money I'll have after the switch?

Let's consider first case:

S(e1) = 2 * A(e1)

which means twice the amount of money I had PROVIDED THAT I PICKED THE
ENVELOPE WITH LESS MONEY.

A(e1) is not equal to A. If I denote the exact amount of dollars in the
envelope with less money by X, then A = 1/2 * X + 1/2 * 2 * X = 3/2 * X, but
A(e1) = X.

Similarly, considering case e2, I get the expected value of money after the
switch equal to

S(e2) = 1/2 * A(e2)

which means half the amount of money I had PROVIDED THAT I PICKED THE ENVELOPE
WITH MORE MONEY.

Again, A(e2) is not equal to A but to 2 * X; or, if you prefer not to use X,
then A(e1) = 2/3 * A and A(e2) = 4/3 * A.

To sum up: the probability of each case is 1/2, so S, which I use to denote
the expected value of money I could have after the switch, is equal to:

S = 1/2 * 2 * X + 1/2 * 1/2 * 2 * X = 3/2 * X = A

or, if you want it without using X:

S = 1/2 * 2 * 2/3 * A + 1/2 * 1/2 * 4/3 * A = A

So there is no gain or loss from switching. You can switch zero or more times
without changing the expected value of the money you will get after opening
your envelope.

Similar reasoning can be conducted for any number of envelopes and any
probability distribution, and I guess it will lead to the same conclusion. I'm
not sure whether the observable symmetry of the situation can be taken as
proof of this conjecture, but I sincerely hope so.

The paradox comes from using the same symbol A to denote three different
expected values of random variables, which I denote above as A, A(e1) and
A(e2).

If you have further doubts about the above "relative" reasoning, I'll gladly
try to clear them up.

~~~
miloshh
You just painstakingly carried out the absolute analysis, which I know works.

Let X be the value in the envelope you have, and Y in the other one (X and Y
are both random variables with well-known distributions). Then E[Y/X] = 1.25.
That's what I wanted you to explain. You just keep saying that E[X] = E[Y],
which I know.

Note that this paradox would not arise if X and Y were independent, since then
E[Y/X] = E[Y] / E[X] = 1.

~~~
scotty79
What reasoning leads to the claim that E[Y/X] = 1.25?

It can't be the reasoning from Wikipedia about the two envelopes problem,
because it has an error in steps 4 and 5, so the results in step 7 and beyond
are nonsensical.

I painstakingly showed what this reasoning should look like to be right, and
that in fact E[Y/X] = E[Y] / E[X] = 1.

------
Dove
The wiki page doesn't explain the flaw in the logic. I think it is this:

Let X denote the value of the larger envelope, and A the value of the envelope
we have in hand.

If we have the larger envelope, the other envelope is worth A/2 (where A=X).

If we have the smaller envelope, the other envelope is worth 2A'. (where
A'=X/2)

So far so good, and this is all legal. The error comes when we try to add:

(0.5 * A/2) + (0.5 * 2A') ≠ 5/4 A

The A's are not the same. You can't add them as if they were.

~~~
sp332
The wiki page used to have the solution. Apparently there is some arguing
going on about sources, but this explanation is correct:
<http://en.wikipedia.org/wiki/Talk:Two_envelopes_problem#Solution_to_original_paradox_is_much_simpler>
Also, yanowitz in this HN thread has a good (equivalent) explanation.

~~~
btilly
And yet, that explanation isn't correct.

------
omaranto
I like Raymond Smullyan's version even better, it removes all probability
issues and is so simple you feel it must be easy to figure out.

The setup is the same: two envelopes, one with twice as much money as the
other; you pick one, and before you open it you are offered the chance to
switch. If you do
switch you might either win money or lose money relative to what was in the
envelope you chose first. Let's compare the possible gain to the possible loss
(without worrying about the probability of having a gain or a loss):

1. The possible gain is equal to the possible loss.

Proof: The two envelopes contain x and 2x, so you'd gain x if switching from x
to 2x, and you'd lose x if switching from 2x to x. The possible gain and the
possible loss are both equal to x.

2. The possible gain is larger than the possible loss.

Proof: Say the envelope you chose first has y. Then the other envelope has
either y/2 or 2y. So if you gain money by switching you gain 2y - y = y, but
if you lose in the switch, you lose y - y/2 = y/2. Clearly y > y/2.

This version kept me up at night for days when I first read it. I never
figured it out -- not that I had any chance of doing so, given that Smullyan
couldn't figure it out either -- I just got used to not understanding it...

~~~
jeromec
This doesn't eliminate probability, and it's really the same flaw, just with
different wording. The flaw is still that the variable is undefined (or
infinite probability) in one part of the word problem, but _treated as
defined_ in another. So, start with when you first chose the envelope which
has y. At this point "y" is undefined, and can be the higher or lower amount.
That's the key. Yet, you then continue to propose that by switching you gain
2y - y = y. That equation is not okay, because "y" could have been the higher
amount, and the equation breaks down. Once you assert the equation holds true,
you have thus defined the original value of "y" from that first envelope
choice (i.e. "y" was lower). The same flaw surfaces by asserting your switch
to lose equation is true, as well.

Edit: BTW, the reason #1 works okay is because you define upfront the variable
value ("x") as being either the higher or lower of the two possibilities,
before constructing your equation(s).

~~~
omaranto
You mention this doesn't eliminate probability but don't explain why you think
that might be the case. I just meant that the argument Smullyan presents does
not involve probabilities, random variables, distributions, expected values,
etc. (This can be verified by reading it again and checking those concepts do
not appear. ;)).

I don't find your argument convincing either, maybe it helps to remove the
"undefined variable" you complain about:

The first envelope chosen has some definite amount of money, let's say $10
(but the same argument can be adapted to any amount). Then the other envelope
must have either $5 or $20. If it is $5, switching loses you $5. If it is $20,
switching gains you $10.

Imagine actually doing this experiment with the two envelopes, opening the
first and seeing $10. Wouldn't you say the other contains either $5 or $20?

~~~
jeromec
It doesn't eliminate probability because it's the exact same envelope
experiment. ;) In other words, there is still a 50% probability of gaining or
losing. My post above isn't an argument, it's an explanation. The "undefined
variable" is established by your own proposal, i.e., in #2 you first choose an
envelope which has an amount we call "y". At that point "y" is undefined,
because you specifically explain it could be either the higher or lower
amount.

I think your confusion lies in not considering carefully enough the meaning of
your wording. In other words, you're performing a play on words to get a
specific result, akin to the "heads I win, tails you lose" word play. Let me
try to explain further below:

 _Imagine actually doing this experiment with the two envelopes, opening the
first and seeing $10. Wouldn't you say the other contains either $5 or $20?_

No. The other envelope contains the amount the person who prepared the
envelopes placed in it.

Try defining _both_ values explicitly and this issue may become clearer.
Imagine one envelope contains $10 and the other contains $5. Go through your
above proposed equations using strictly those values. Remember, in a real
experiment that would be completely valid.

------
robinhouston
That Wikipedia article is really bad. Here is a good article about the Two
Envelopes Paradox, by Keith Devlin:
<http://www.maa.org/devlin/devlin_0708_04.html>

------
te_platt
While not exactly the same problem this is what cleared it up for me:

First pick a positive integer at random. Now pick a second integer at random.
What is the probability that the second number is larger? The answer seems to
be 1 because after the first number is chosen there are finitely many numbers
smaller but infinitely many larger. But what if two people (A and B) choose
two positive integers at random but don't tell anyone just yet. What is the
probability that A>B? By symmetry we can see it should be 0.5. So A tells his
number. Now the probability that A>B seems to jump to 0 because there are
finitely many numbers smaller than A but infinitely many larger. But why
should the probability change when we don't even know what A is, just that it
is known?

The problem lies with the phrase "choose a positive integer at random".
However you choose the number you will be hopelessly biased toward 0 because
every number you choose will be finite. Now think about the envelope problem
but add the condition that the largest amount that any envelope will have is
X. Then if your envelope has N in it there is a chance that 2N is larger than
X. That cuts down on the chance that switching gets you more. Magically
(because I'm too lazy to actually write it all out) it turns out it cuts it
down just so that the probability of getting more money by switching works
out to 0.5.
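
The bounded version is small enough to enumerate exactly. A sketch (the uniform prior on 1..50 for the smaller amount and the cap of 100 are invented for illustration):

```python
from fractions import Fraction
from collections import defaultdict

# Exact enumeration of the bounded version: the smaller amount n is
# uniform on 1..50, the envelopes hold (n, 2n), and you pick one of the
# two at random. weight[(a, larger)] is the probability that you observe
# amount a and the OTHER envelope is the larger one.
half = Fraction(1, 2)
weight = defaultdict(Fraction)
for n in range(1, 51):
    prior = Fraction(1, 50)
    weight[(n, True)] += prior * half        # you hold n, other is 2n
    weight[(2 * n, False)] += prior * half   # you hold 2n, other is n

overall = sum(w for (a, larger), w in weight.items() if larger)
print(overall)                               # exactly 1/2: no edge overall

def p_switch_wins(a):
    """P(the other envelope is larger | your envelope contains a)."""
    num = weight[(a, True)]
    den = weight[(a, True)] + weight[(a, False)]
    return num / den

print(p_switch_wins(7))    # 1   (odd, so it must be the smaller amount)
print(p_switch_wins(40))   # 1/2 (could be n = 40 or 2n with n = 20)
print(p_switch_wins(60))   # 0   (must be 2n with n = 30)
```

Per observed amount the odds swing wildly, but averaged over everything you might see, the probability that switching wins is exactly 1/2.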

~~~
jeromec
 _So A tells his number. Now the probability that A>B seems to jump to 0
because there are finitely many numbers smaller than A but infinitely many
larger._

This is not correct. You said B chose a positive integer. Therefore, there are
not infinitely many larger numbers B's choice can be.

~~~
InclinedPlane
Sure there are. No matter what finite positive integer you choose there are
infinitely more finite positive integers greater than that number.

~~~
jeromec
That's true, but what te_platt said was "two people (A and B) choose two
positive integers at random"

If two people, A and B, have chosen -- that's choose in the _past tense_ --
two positive integers, then there are not infinitely many larger positive
integers that B's number can be. Why? Again, because B has already made a
choice.

~~~
InclinedPlane
Actually, it doesn't matter. If you rephrase the problem so that B picks after
A and you ask the question "is B likely to pick a larger number than A" before
B picks then the answer should be "yes".

However, this is another example of being misled in reasoning. But not related
to timing. The important revelation is understanding that there's no such
thing as a method for generating truly random numbers across the entirety of
the positive integers. Since there are infinitely many the probability of
picking any one integer is 1/infinity. _Any_ concrete random algorithm for
positive integers will be hopelessly biased toward 0 (since it will almost
certainly have a maximum value, and thus an infinite number of positive
integers above that maximum). As long as both A and B use the same algorithm
then the chances are equal that A or B will pick the larger number.

~~~
jeromec
_If you rephrase the problem so that B picks after A and you ask the question
"is B likely to pick a larger number than A" before B picks then the answer
should be "yes"._

I agree, but that wasn't what was stated. You've "moved the goal posts" so to
speak. ;) The OP said both A and B had chosen a number (but only one had
revealed it). That introduces TWO finite values into the experiment. The
timing for _when_ B makes the choice makes all the difference.

------
btilly
First a random observation. People use "expected value" to evaluate risky
decisions far too often. In many investment situations a more appropriate
measure is "expected value of the log of my worth". (See
<http://elem.com/~btilly/kelly-criterion/> for an explanation of why.) That
measure, in this problem, says that you should be indifferent about switching.

Admittedly that is coincidence - the investment reasoning for that rule has
little to do with why that rule works for this problem.

But it turns out that there is an even crazier twist to this problem. Change
the problem to say that you're allowed to look at the envelope you receive
before deciding whether to switch.

Would you believe that there is an algorithm for deciding whether to switch
that results in your getting the larger envelope more than 50% of the time?
The catch is that those odds depend on the unknown amounts in the envelopes.
But, no matter what is in the envelopes, you are guaranteed to get the larger
one more than 50% of the time.

See <http://www.perlmonks.org/?node_id=39366> for an explanation. Read it very
carefully, because the problem is very subtle. Even a slight change in what is
meant tends to render the problem ill defined and indeterminate. (And our
intuition is very bad.)

~~~
harryh
That blog post is wrong. If N is the difference between the values in the two
envelopes, then the chance that you will guess a value in this range is N/∞
which is 0.

~~~
sesqu
Not so. The only stipulation was that you must be able to guess a number in
any range (which is not possible in practice). It doesn't matter that the
probability of picking any particular number will be 0, as long as the
probability of picking one between the two amounts in the envelopes isn't 0.
Any time your guess hits the correct range, you win. Any other time, it
depends on
whether you got the big envelope or the small one.

So knowing just one endpoint of a range is, oddly, evidence as to which way
the other endpoint lies - as long as it might be either one. You just take a
guess _that depends on the number you know_ in such a way that the smaller the
number you see, the more likely you are to guess it's the smaller of the two,
but are never certain. Presto, guaranteed you'll get more than 50% right, as
long as you get fed both the small and the large envelopes in equal
proportion.

If you can't be sure the other guy is playing fair and might favor giving you
the low envelope, you need to look at something like the Monty Hall problem.
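
The strategy sesqu describes can be sketched in a few lines (the exponential threshold and the $100/$200 amounts are illustrative choices, not part of the original statement; the idea is usually attributed to Thomas Cover):

```python
import random

# After seeing your envelope, draw a threshold T from any distribution
# with full support on (0, infinity); keep the envelope if what you saw
# exceeds T, otherwise switch. Whenever T lands between the two amounts
# you are guaranteed to end up with the larger envelope.
def play(low, high, rng):
    seen, other = (low, high) if rng.random() < 0.5 else (high, low)
    t = rng.expovariate(0.01)            # full-support threshold, mean 100
    final = other if seen < t else seen  # switch only if seen looks "small"
    return final == high

rng = random.Random(1)
n = 200_000
win_rate = sum(play(100, 200, rng) for _ in range(n)) / n
print(win_rate)    # strictly above 0.5
```

With envelopes of 100 and 200 the win rate is 1/2 + (1/2) * P(100 < T < 200), about 0.62 for this threshold distribution; the edge shrinks for other amounts but never vanishes.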

------
jwegan
I think the problem is that they effectively change a problem with 2
possibilities (either you get $X or you get 2 * $X) into a problem with 3
possibilities: A/2 (one possible amount in the other envelope), A (the amount
in your envelope), and 2A (the other possible amount in the other envelope),
by fixing A relative to the hypothetical amount you could have in your
envelope.

~~~
Uhhrrr
Yeah - it's not really (0.5*A + 2*A)/2, it's (0.5*A1 + 2*A2)/2, where
A1 = 2*X (as you have above) and A2 = X.

Substituting, your expected value upon changing is 3X/2, which is, surprise
surprise, the same as the expected value for the envelope you already have.

------
trominos
The flaw in the argument for switching comes in its second line:

"1. I denote by A the amount in my selected envelope.

 _2. The probability that A is the smaller amount is 1/2, and that it is the
larger amount is also 1/2._ "

If we're gonna be taking expected values we need to assume that the monetary
amounts in the two envelopes are generated from a probability distribution on
the nonnegative reals. Then the probability distribution for the amount of
money in the smaller envelope is going to be some kind of curve (call it S),
and obviously the probability distribution for the amount of money in the
bigger envelope is going to be the same curve only "stretched out" and
"squashed" by a factor of two (we'll call it B).

Now if you say "my envelope contains exactly A dollars," you can tell what the
probability is that your envelope is the smaller envelope by comparing the
relative heights of the two curves at the value A; let's denote these heights
by S(A) and B(A). It is certainly _possible_ that the two heights are the
same, in which case it's fifty-fifty that you have the bigger or smaller
envelope, the rest of the logic holds, and you should indeed switch, getting
an expected return of 5/4 * A.

But it is obviously impossible that _in general_ the two heights are the same
for any given A, because then the two probability distributions are the same
-- and that can't be true, because B is a squashed, stretched-out version of
S, and for a variety of fairly obvious reasons you can't squash and stretch
out a finite curve on the positive reals and get the same curve (unless that
curve is 0 everywhere). And so we can't conclude that _in general_ you should
switch, which is good because if you're not allowed to look at the money
before you switch it obviously doesn't matter whether you do or not (unless
the people running the game are messing with you).
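
This argument is easy to make concrete with an invented prior (a sketch: the smaller amount is 2**k dollars with probability proportional to (1/3)**k, truncated at k = 19; S and B below play the roles of the two curves above):

```python
from fractions import Fraction

# S is the distribution of the smaller envelope's contents under an
# invented geometric-style prior; B is S "stretched" by a factor of two,
# i.e. the distribution of the larger envelope's contents.
q = Fraction(1, 3)
S = {2**k: q**k for k in range(20)}
total = sum(S.values())
S = {a: p / total for a, p in S.items()}
B = {2 * a: p for a, p in S.items()}

def p_have_smaller(a):
    """P(your envelope is the smaller one | it contains a dollars)."""
    s, b = S.get(a, 0), B.get(a, 0)
    return s / (s + b)

print(p_have_smaller(1))      # 1   : $1 can only be the smaller envelope
print(p_have_smaller(16))     # 1/4 : NOT fifty-fifty under this prior
```

So for a specific observed amount the "it's 1/2 either way" step fails, exactly as argued above.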

------
mirkules
There's a similar problem with three closed doors, one of which contains a
prize behind it. You are asked to choose a door, and then shown one of the two
remaining doors which does NOT contain the prize. The question is should you
switch doors, given the opportunity? What is the probability of getting the
prize if you switch, and if you don't switch?

To illustrate, let A, B, C be doors, and door C be the door with the prize.
You choose door A; the host tells you the prize is not in door B. What is the
probability you will get the prize if you switch your door?

~~~
necubi
No, this problem is different, and has a different solution. In your problem
it makes sense to switch when given the choice, whereas in this one it does
not. With the three doors you're given additional information once you choose
a door, which allows you to choose a door with 1/2 probability of being right,
rather than the 1/3 chance you started out with. In this problem you are given
no additional information, so the probability of choosing the right envelope
is the same when given a choice to switch as it is at the beginning (i.e.,
1/2).

~~~
mirkules
Actually, your probability of choosing the right door is 2/3 if you switch. I
agree though that this problem is not the same, it just reminded me of it -- I
guess you could say it's a similar class of problems.
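
For the three-door game described above, a quick simulation confirms the stay and switch win rates:

```python
import random

# Simulate the three-door (Monty Hall) game: the host always opens a
# door that is neither your pick nor the prize.
def play(switch, rng):
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(2)
n = 100_000
stay = sum(play(False, rng) for _ in range(n)) / n
switch = sum(play(True, rng) for _ in range(n)) / n
print(stay, switch)   # roughly 1/3 and 2/3
```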

------
CJefferson
The problem with this is very subtle.

The problem is saying that it is equally likely that the other envelope
contains half, or double, the money of the envelope we have chosen.

It is impossible to choose a value from an infinite series (the list of
possible monies), with every value having an equal opportunity of being
chosen.

Therefore the probability distribution of the money must be non-uniform in
some way, and the argument falls to pieces.

It is not reasonable to say "Well, I don't know the probability, so I'll treat
it as 50/50".

~~~
philh
No, that's not the problem. You don't know how much money is in either
envelope, but it needn't have been selected randomly. The only randomness is
whether the envelope you choose has the greater or lesser value, which is
just a uniform distribution on two points.

That being said,

>It is impossible to choose a value from an infinite series (the list of
possible monies), with every value having an equal opportunity of being
chosen.

this is also incorrect. You're right to say that you can't have a uniform
distribution over the possible values of money, but that's not just because
it's infinite. You can have a uniform distribution over [0,1] for example,
which is larger than the set of possible money values.

~~~
cdavidcash
>You don't know how much money is in either envelope, but it needn't have been
selected randomly.

This is exactly wrong, at least in the mathematical formalization of this
problem that everyone is assuming. The definition of "don't know how much
money" must be taken to be "has a uniform distribution".

If you want to use some reasoning that treats the amount of money as an
"unknown in an equation" then you are instead giving a solution that differs
for each possible value of the money and does not capture the random choices
we are trying to model.

By the way, no one said that uniform distributions on infinite sets don't
exist. He said "infinite series (the list of possible monies)", a slightly
mis-used term that clearly excludes [0,1], your chosen example.

------
jatenate
The swapping indefinitely does not work. For the first iteration, if I have A
in my envelope, then yes, the other envelope will either have 1/2A or 2A,
meaning the expected value is indeed 5/4A. The switching argument wrongly
assumes this is again the case after I switch. After I have switched, the
value of the original envelope is unchanged (meaning it is A with probability
1), so I should not switch back.

~~~
bmm6o
Are you seriously arguing that you should switch exactly once?

------
Ramfjord
I believe there's some problem with assuming that, given an envelope
containing X dollars, the other envelope has a 50% chance of containing 2X
and a 50% chance of containing X/2. This step is clearly what leads us into
the paradox.

Suppose you open the first envelope, find $100, and are given the option to
switch. If there is a 50/50 chance of the other envelope containing $200 or
$50, we could easily simulate this problem a million times and find that it
averages to $125.

The problem here is that the numbers have not been predetermined at the
start. If a naive player repeatedly played a game where the numbers are
guaranteed to be 100 and 200, then the always-switching strategy would be
exactly the same as the always-staying strategy.

In this case, let the difference between the two envelopes be x. When you
switch, you have a 50% chance of losing x and a 50% chance of gaining x. There
is really no problem if you define the two envelopes as differing by a fixed
number, rather than one being a multiple of the other. When the two numbers
have fixed, predetermined values, the two problems are the same.

------
aliston
Interesting... here's my "gut reaction" as a non-statistician. The problem is
that 'A' has a probabilistic component as well. If you let X be the "lesser of
the 2 dollar amounts" and let A = (1/2)X + (1/2)(2X), then you come out with
equal expected values for switching and not switching.

------
praptak
I believe the root of the paradox is that there exists no distribution of two
variables A,B for which:

a) (A,B) always fulfills A = 2*B or A = 0.5*B, and

b) for any value of A, the conditional probability of B = 2*A is equal to the
conditional probability of B = 0.5*A.

------
mitko
A friend of mine told me a similar problem that he got at a D.E. Shaw
interview.

Person A writes random real numbers in two envelopes. Then person B randomly
50/50 picks one envelope and sees the number X written there. Then, B has to
guess whether X is the bigger of the two numbers or not. Show that B has a
strategy, that even if A knows it, B can guess correctly in more than 50% of
the cases.

The mathematical beauty of this problem is that you can derive the non-trivial
steps towards the solution.

------
mitko
For me, what is interesting in this problem (besides the obvious flaw that
people here discussed) is that the answer changes if the value A is observed.
I.e.:

If I have two envelopes, one of which has twice the money of the other, and
you pick an envelope AND SEE the money inside, then it becomes better to
switch (in expected money).
------
techiferous
I found this paradox so compelling that I blogged about it a while ago. I
tried to tease out the "intuitive" explanation without resorting to much
statistics. <http://techiferous.com/2010/06/the-two-envelope-paradox/>

------
DannoHung
Because you're negating a random choice and it's not like Monty Hall where a
bad choice is removed for you.

~~~
lionhearted
I didn't get that Monty Hall problem for a while after seeing it, but then it
clicked and I got it. It's really beautiful to think about:

<http://en.wikipedia.org/wiki/Monty_Hall_problem>

------
uio
You denote with A two different things, and this is not correct. Suppose you
have a bag with two animals, a dog and a fox. You choose an animal A ten
times, and then another animal A ten times, so you have 20 A. The question is:
do you have twenty dogs or twenty foxes?

------
geuis
Ok, I'm not a huge math wiz and after reading this article 2x, I'm still at a
loss as to what the answer is. So after we have the nice breakdown in the form
of a mathematical proof, should we keep switching envelopes forever? I'm the
kind of guy who isn't as interested in the joy of solving the problem. I just
want the money!

If I were to break this down into layman's terms, would it be appropriate to
say the following?

"There's equal odds whether you keep the envelope or exchange it for the other
one. There's no difference either way, so flip a coin and decide."

------
drhodes
The example seems to miss something critical. If the dollar amount of the
first envelope is odd, then it can't be the doubled sum...

------
samd
Key takeaway:

 _In the words of [David] Chalmers this is "just another example of a familiar
phenomenon, the strange behaviour of infinity"._

------
jpwagner

      for all $X > p, (price to play)
    
      myLifeQuality(p,X) > myLifeQuality(X,2X) > myLifeQuality(X,X/2)
    

easy

------
klochner
I got that in a tech interview and hadn't seen it before. Dirty.

~~~
dotBen
In what context was the interview (i.e., what type of position)?

I'm all for logical problem solving questions in interviews, even ones that
can't be solved but are interesting to see how people tackle them.

However this one doesn't seem to lend itself much to programming or true
problem solving - just probability theory.

In other words, I'm not sure what useful info I'd get out of a candidate by
asking them to work on this (other than their understanding of probability,
which is not that useful for most programming jobs)

------
pvg
And that's Numberwang!

------
dmor
can't eat your cake and have it too

------
imajes
I think this is better posed as two randomly-drawn sets of numbers for the
first drawing of an as-yet-unplayed lottery. Then there's no value in
considering extraneous data such as the mass of the envelopes, looking through
them, or other potential evidence.

Instead, you're left with two identical _chances_. Since there is no data to
suggest that either has an advantage over the other, you have a probability of
exactly 0.5 that you have the better chance of winning.

On the question of the switch, it's important to consider that you have no new
data about the one you picked, so the odds are _exactly the same_ as they were
at the beginning: there is no conditionality; nothing has changed since you
picked it.

The probability then of the winning ticket is still 0.5. This is where the
infinite probability problem kicks in.

I don't think this is an actual problem needing to be solved, unless we can
find a way to mathematically express the probability modifiers of our gut
feeling, which is impossible, as it comes down to our own personal
experiences.

Still, Gladwell might have something to say about it :)

------
richtofen
Trouble is, we are making it a mathematical problem. It need not be. The same
logic applies to human [as opposed to mathematical] problems like: 1. Don't
you wish your girlfriend was hot like me? 2. How much money is in the first
envelope, to begin with? Whatever there is, I can put it to better use in my
business than some class 8 formula. 3. How many birds are there in the bush?
4. What if...

All right. Comments notwithstanding. Seriously - how much money?

~~~
richtofen
This was a seriously crass comment. What was I thinking? My apologies for
trivializing the original inquiry of the thread.

------
risotto
The 'paradox' that is presented leads back to the right conclusion: there is
no way to tell which envelope is better, so just pick one at random.

These are non-causal random events, and the probability is always 50/50.

There is a twist to this problem, which is more interesting, where you open
the first envelope, then decide whether or not to switch to the second
envelope.

Believe it or not, there is a trick to beating 50% in this variant.

I'm too tired to try to fully explain it, but the conclusion comes from the
fact that once you look at an envelope, you have information that can be used
in deciding whether to switch. So it becomes a causal system that does not
follow a random probability density function.

But think: if the envelope you opened had $7 million in it, you probably
wouldn't and shouldn't switch, and you'd probably be able to skew the odds in
your favor in scenarios like this.

