
The Axiom of Choice Is Wrong (2007) - monort
https://cornellmath.wordpress.com/2007/09/13/the-axiom-of-choice-is-wrong/
======
Ended
A related, and to my mind even more counter-intuitive, result is a strategy
for predicting f(x) given only the values {f(y) : y < x}, where f is an
arbitrary function on the reals.

[https://www.math.upenn.edu/~ted/203S10/References/peculiar.pdf](https://www.math.upenn.edu/~ted/203S10/References/peculiar.pdf)
[PDF]
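If I'm reading the linked Hardin–Taylor paper right, the strategy itself is one line once you invoke choice (a sketch from memory, so check the paper for the precise statement):

```latex
% Fix (by the axiom of choice) a well-ordering \prec of the set of all
% functions \mathbb{R}\to\mathbb{R}. Having seen f on (-\infty, x),
% predict using the \prec-least function consistent with the data:
\hat{f}_x \;=\; \min_{\prec}\,\bigl\{\, g : \mathbb{R}\to\mathbb{R}
  \;\bigm|\; g\restriction_{(-\infty,x)} = f\restriction_{(-\infty,x)} \,\bigr\},
\qquad \text{guess } \hat{f}_x(x).
```

The punchline, as I recall it: if the strategy errs at two points x1 < x2, then the representative chosen at x1 is strictly ≺-below the one chosen at x2, so an infinite descending sequence of error points would give an infinite descending ≺-chain, which a well-order forbids. The error set is therefore well-ordered by <, hence countable: the strategy predicts f(x) correctly at all but countably many x.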

------
IsaacL
I only have a fuzzy idea of what they're talking about here, but Terence Tao
had an interesting reply:

"This paradox is actually very similar to Banach-Tarski, but involves a
violation of additivity of probability rather than additivity of volume.

Consider the case of a finite number N of prisoners, with each hat being
assigned independently at random. Your intuition in this case is correct: each
prisoner has only a 50% chance of going free. If we sum this probability over
all the prisoners and use Fubini’s theorem, we conclude that the expected
number of prisoners that go free is N/2. So we cannot pull off a trick of the
sort described above.

If we have an infinite number of prisoners, with the hats assigned randomly
(thus, we are working on the Bernoulli space {\Bbb Z}_2^{\Bbb N}), and one
uses the strategy coming from the axiom of choice, then the event E_j that the
j^th prisoner does not go free is not measurable, but formally has probability
1/2 in the sense that E_j and its translate E_j + e_j partition {\Bbb
Z}_2^{\Bbb N} where e_j is the j^th basis element, or in more prosaic
language, if the j^th prisoner’s hat gets switched, this flips whether the
prisoner gets to go free or not. The “paradox” is the fact that while the E_j
all seem to have probability 1/2, each element of the event space lies in only
finitely many of the E_j. This can be seen to violate Fubini’s theorem – if
the E_j are all measurable. Of course, the E_j are not measurable, and so
one’s intuition on probability should not be trusted here.

There is a way to rephrase the paradox in which the axiom of choice is
eliminated, and the difficulty is then shifted to the construction of product
measure. Suppose the warden can only assign a finite number of black hats, but
is otherwise unconstrained. The warden therefore picks a configuration
“uniformly at random” among all the configurations with finitely many black
hats (I’ll come back to this later). Then, one can again argue that each
prisoner has only a 50% chance of guessing his or her own hat correctly, even
if the prisoner gets to see all other hats, since both remaining
configurations are possible and thus “equally likely”. But, of course, if
everybody guesses white, then all but finitely many go free. Here, the
difficulty is that the group \lim_{n \to \infty} {\Bbb Z}_2^n is not compact
and so does not support a normalised Haar measure. (The problem here is
similar to the two envelopes problem, which is again caused by a lack of a
normalised Haar measure.)"
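Tao's finite-N claim is easy to check numerically. Here's a toy simulation of my own (not from the thread) using the classic parity strategy, which correlates the prisoners' successes as strongly as possible — either all N go free or none do — and yet, exactly as the Fubini argument demands, the expected number freed is still N/2:

```python
import random

def expected_freed(n=10, trials=20_000, seed=0):
    """Finite-N hat puzzle with the parity strategy: each prisoner sees
    every hat but their own and guesses the colour (0/1) that would make
    the total number of 1-hats even.  Either everyone is right (true
    parity even) or everyone is wrong, so successes are perfectly
    correlated -- but the expected number freed is still n/2, matching
    the sum of the individual 1/2 probabilities."""
    rng = random.Random(seed)
    freed_total = 0
    for _ in range(trials):
        hats = [rng.randint(0, 1) for _ in range(n)]
        total = sum(hats)
        for i in range(n):
            # prisoner i sees total - hats[i] ones; guess the bit that
            # would make the overall parity even
            guess = (total - hats[i]) % 2
            freed_total += (guess == hats[i])
    return freed_total / trials

print(expected_freed())  # close to 5.0 for n = 10
```

No strategy can beat N/2 in expectation here; correlation only changes the distribution of the count, not its mean — which is precisely what fails (for non-measurable events) in the infinite case.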

I found thinking about the two envelopes problem he mentions
([https://en.wikipedia.org/wiki/Two_envelopes_problem](https://en.wikipedia.org/wiki/Two_envelopes_problem))
a more accessible way to wrap one's head around the paradoxes that arise when
you compare a potentially infinite quantity to a known finite quantity.

(The crux of the envelope paradox is that each envelope has an infinite
expected value, but in reality contains only a finite amount. I can sorta-
kinda see how the infinite hats problem is another instance of the same
paradox.)
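For what it's worth, the swap argument collapses as soon as you impose any honest (proper) prior on the amounts — which I take to be Tao's point about the missing normalised Haar measure. A toy simulation (my own made-up prior: the smaller amount is 2^k for a uniformly random k, nothing canonical) shows keeping and swapping have the same expected value:

```python
import random

def envelope_means(trials=200_000, seed=0):
    """Two-envelope toy model under a proper prior: the smaller amount
    is 2**k with k uniform on 0..9; the other envelope holds twice that,
    and you open a uniformly chosen one.  The naive argument claims
    swapping gains 25% on average; under an actual prior, keep and swap
    are symmetric and their expected values coincide."""
    rng = random.Random(seed)
    keep = swap = 0.0
    for _ in range(trials):
        x = 2 ** rng.randint(0, 9)   # smaller of the two amounts
        pair = (x, 2 * x)
        pick = rng.randint(0, 1)     # which envelope you opened
        keep += pair[pick]
        swap += pair[1 - pick]
    return keep / trials, swap / trials

k, s = envelope_means()
print(k, s)  # the two means agree, both near 1.5 * E[smaller amount]
```

The "always gain 25% by swapping" conclusion needs the prior on amounts to be uniform over an unbounded set, and no such normalised measure exists — the same obstruction as for the non-compact group in the finitely-many-black-hats variant.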

