I always find this kind of philosophical thought experiment unsatisfying.
Super-accurate predictions of human behaviour are just not possible. If I could do it, I'd be a gazillionaire philanthropist/playboy dating supermodels and advising heads of state because I can't be bothered to rule the world directly. As it is, I can't do better than a draw against a 5-year-old at rock-paper-scissors.
So this paradox tells us more about psychology than philosophy. Folks who think "A and B" is the right answer basically ignore the bit about the predictor never (or almost never) being wrong and go with a strategy that is great for fallible human predictors.
And well they should. The only thing more ridiculous than an infallible predictor is one that wastes his time playing shell games where the best he can do is break even.
The player is the AI, and the predictor is the AI programmer.
The AI programmer can look into the innards of the AI, i.e. the source code, and can thus predict with high accuracy what the AI will do.
What is a winning strategy for the AI?
Or, taken another way, you can have AIs that have access to each other's source code and are competing for some scarce resource: how do you design an AI that 'wins' when its behaviour is known to its opponent?
Sure you can. The AI's behaviour could be as simple as "Always pick box B".
The difficult bit would be designing an AI which, given perfect knowledge of its logic, would pick both boxes despite appearing to be more likely to pick only B.
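Here's a minimal sketch of that setup in Python, assuming the predictor "reads the source" by simply running the player's decision function before filling the boxes (the function names are illustrative, not part of the original problem):

```python
# Hypothetical sketch: the programmer-as-predictor simulates the AI's
# decision function, then fills box B based on the predicted choice.

def always_one_box():
    return "B"          # the transparent "Always pick box B" strategy

def always_two_box():
    return "both"

def play(decision_fn):
    box_a = 1_000
    prediction = decision_fn()                  # predictor simulates the AI
    box_b = 1_000_000 if prediction == "B" else 0
    choice = decision_fn()                      # the AI's actual (identical) choice
    return box_b if choice == "B" else box_a + box_b

print(play(always_one_box))   # 1000000
print(play(always_two_box))   # 1000 -- the predictor saw it coming
```

Because a deterministic AI makes the same choice in the simulation and in the real game, it can never grab both boxes while box B is full.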
In that case, you could simply have an AI with a 0.4999999 chance of picking both and a 0.5000001 chance of picking only B. The expected winnings would be about $1,000,500.
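Spelling out that arithmetic (a sketch, assuming the predictor reacts to a mixed strategy by predicting the more likely pure choice, so box B gets filled):

```python
p_both, p_b_only = 0.4999999, 0.5000001

# Box B holds $1,000,000 because "only B" is the (slightly) more likely prediction.
expected = p_both * (1_000_000 + 1_000) + p_b_only * 1_000_000
print(f"{expected:,.2f}")   # ~1,000,500.00
```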
But then, once it comes down to probability, the predictor is no longer a 'perfect predictor'.
That would probably count as a random choice, and as the rules state:
> if the Predictor predicts that the player will choose randomly, then box B will contain nothing.
I don't think the schism is about fallible vs. infallible predictors so much as whether people are stuck in naive decision theory (just take both boxes; it's already decided beforehand!) or not (hey, the kinds of decisions I'm willing to make might have changed what Omega decided in the past!).
After all, Omega need not be infallible. So long as he predicts your decision with an accuracy above 50.05% (slightly better than a coin toss), one-boxing is the profitable choice:
---
Let p be the probability that Omega predicts your decision correctly.
E(one-boxing) = p⋅$1mil + (1-p)⋅0
E(two-boxing) = (1-p)⋅$1.001mil + p⋅$1k
Solving for E(one-boxing) > E(two-boxing):
p⋅$1mil > (1-p)⋅$1.001mil + p⋅$1k
p⋅($1mil + $1.001mil - $1k) > $1.001mil
p⋅$2mil > $1.001mil
p > 50.05%
---
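A quick numerical check of that breakeven, as a sketch using the same payoffs:

```python
def e_one_box(p):
    return p * 1_000_000

def e_two_box(p):
    return (1 - p) * 1_001_000 + p * 1_000

# Expected payoffs below, at, and above the p = 0.5005 breakeven.
for p in (0.49, 0.5005, 0.51):
    print(f"p={p}: one-box={e_one_box(p):,.0f}  two-box={e_two_box(p):,.0f}")
```

Above p = 0.5005, one-boxing strictly wins in expectation, which is where the 50.05% figure comes from.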
And if he doesn't predict you slightly better than a coin toss, why is he called the Predictor?
It's unsatisfying because it's so poorly defined. As you attempt to define it more precisely, it just converges on the question of whether free will exists.