You're using a mathematical model that doesn't apply. The ahead-of-time simulations invalidate the idea that your decision can't affect the outcome, despite the final decision ultimately occurring afterwards.
An analogy would be asserting that you can't possibly shoot yourself in the back of the head when firing into the distance, and sticking to that position even after finding out you're in a Pac-Man-style loop-around world.
A much closer but more technical analogy is that you can't solve imperfect information games by recursively solving subtrees in isolation. Optimal play can involve purposefully losing in some subtrees, so that bluffs are more effective in other subtrees.
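As a minimal sketch of that point (a toy betting game supplied here purely for illustration, not anything from the thread): player 1 always bets strong hands and bluffs weak ones with probability b. Judged inside the weak-hand subtree alone, a called bluff loses more than checking would have; judged globally, bluffing a third of the time is what maximizes expected value.

```python
# Toy bluffing game: both players ante 1. P1's hand is High or Low with
# equal probability. P1 may bet 1 or check; facing a bet, P2 calls or
# folds. P1 always value-bets High and bluffs Low with probability b;
# P2 plays a best response to b.
def p1_ev(b):
    # Given a bet, P2's posterior on Low is b / (1 + b), so calling beats
    # folding exactly when b > 1/3 (P2 is indifferent at b = 1/3).
    call = 1.0 if b > 1 / 3 else 0.0
    ev_high = 1 + call                              # P2 folds: +1, calls: +2
    ev_low = b * (1 - 3 * call) + (1 - b) * (-1)    # bluff: +1 / -2, check: -1
    return 0.5 * ev_high + 0.5 * ev_low

for b in (0.0, 1 / 6, 1 / 3, 2 / 3, 1.0):
    print(f"bluff rate {b:.2f}: P1 EV {p1_ev(b):+.3f}")
# Never bluffing (b = 0) earns 0; bluffing a third of the time earns +1/3,
# even though every called bluff loses more than checking would have.
```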
The fact that you are doing worse by two-boxing, leaving with a thousand dollars instead of a million, despite following logic that's supposed to maximize how well you do, should be a huge red flag.
How do "the ahead-of-time simulations invalidate the idea that your decision can't affect the outcome, despite the final decision ultimately occurring afterwards?" They're only simulations. The predictor is defined as being very likely to have correct predictions; it's not defined as God or a time traveler or an omniscient computer with knowledge of the universe's intricate workings.
The fact that to an observer who knows the contents of the boxes (say, the moderator or an audience) you always look like an idiot for taking only one box and leaving money on the table, should be a huge red flag.
But that's the thing: you assume there's something different about you and the simulated you, where in theory there might not be.
In other words, if you're hard-coding something, then hard-coding "pick B" gets you 1,000,000 but hard-coding "pick AB" gets you 1,000, assuming the predictor looks at your source code.
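A minimal sketch of that, assuming a predictor that literally reads the agent's source (the function names and the string-matching "predictor" are hypothetical, purely for illustration):

```python
import inspect

def one_boxer():
    return "B"     # hard-coded: take only box B

def two_boxer():
    return "AB"    # hard-coded: take both boxes

def predict(agent):
    # Stand-in for "the predictor looks at your source code": any
    # mechanism that models the chooser accurately would do.
    return "B" if 'return "B"' in inspect.getsource(agent) else "AB"

def play(agent):
    prediction = predict(agent)                   # boxes are filled FIRST
    box_b = 1_000_000 if prediction == "B" else 0
    choice = agent()                              # the on-stage decision
    return box_b if choice == "B" else box_b + 1_000

print(play(one_boxer))   # 1000000
print(play(two_boxer))   # 1000
```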
As to the game show: you would have a series of people where those who pick B get 1,000,000 and those who pick AB get 1,000. Now which group looks like idiots?
Edit: Depending on the accuracy of the predictions, it's less about information traveling into the past than about being the type of person that chooses B.
I don't know why you're talking about hard-coding and simulation and whatnot. The mechanism that the predictor uses is completely irrelevant and specifically defined to be unknown in the thought experiment description, aside from it disallowing backwards causality and things like time travel.
Every single person who picked only box B left $1000 on the table. That's a bare fact. You don't even need to know or care what the prediction is to know that.
In general when someone leaves $1000 that they could have had, no strings attached, that's a less desirable outcome than the one where they had the extra $1000.
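Spelled out as a toy check (with the proviso in the comment being exactly what one-boxers reject), the dominance argument is just this:

```python
# Whatever box B already contains, taking both boxes nets $1,000 more,
# PROVIDED the contents don't correlate with the choice; that proviso
# is exactly the point the one-boxers dispute.
for box_b in (0, 1_000_000):       # contents fixed before the choice
    one_box = box_b
    two_box = box_b + 1_000
    print(f"box B = {box_b}: two-boxing gains {two_box - one_box}")
```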
You're assuming it's impossible to accurately predict what someone would choose when it's directly stated that the predictor can.
If you're the kind of person that picks AB, then you get 1,000.
If you're the kind of person that picks B, you get 1,000,000.
There are no other options.
PS: Consider a version where the "prediction" is made by having you walk on stage and make the choice some random number of times greater than 20, with one of them randomly selected as the one that counts.
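To put numbers on the "kind of person" claim, here's a quick expected-value sketch, assuming only that the predictor is correct with some probability p (the 0.5005 crossover falls straight out of the stated payoffs):

```python
# Expected value of each strategy when the predictor is correct with
# probability p. Setting them equal gives the crossover:
# p * 1_000_000 = 1_000 + (1 - p) * 1_000_000  =>  p = 0.5005.
def ev_one_box(p):
    return p * 1_000_000                  # box B is full iff predicted "B"

def ev_two_box(p):
    return 1_000 + (1 - p) * 1_000_000    # box B is full iff predictor erred

for p in (0.5, 0.5005, 0.9, 0.999):
    print(f"p = {p}: one-box {ev_one_box(p):>9.0f}, two-box {ev_two_box(p):>9.0f}")
```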
I'm not saying it's impossible to predict anything. I'm saying that people who choose only box B are always choosing the inferior of the two options available to them, because the money is already on the table and no one is going to change that configuration based on the person's choice (as stated in the problem).
As I have said, the prediction method or accuracy is largely irrelevant to the actual paradox, aside from a means to incentivize people to behave in an obviously irrational way :).
(I don't really think that; the other principle of decision for one-boxers is induction based on prior observations. The whole point of the paradox is that neither side has a decisive argument against the other. The important point here is that free will/determinism, possibility of perfect simulation, etc. are not part of the problem this paradox is intended to illuminate.)
"I'm saying that people whose choose box B are always choosing the inferior of the two options available to them"
Except there are two occasions to choose B. One is on the stage and the other is as part of the model the predictor uses. And in that case you really want to be modeled as someone who chooses B.
In the end, what happens on stage is almost irrelevant: 99.9% of the value comes from how you're modeled and 0.1% comes from what you do on stage. So, how do you get modeled as someone who chooses B?
Well, if they're accurate, the only way to influence that prediction is to choose B on stage.
And yes, with accurate modeling information can travel backward in time. Just consider people taking an umbrella to work because of a weather prediction. In this case the rain caused people to bring an umbrella before it happened.
Now, you can argue that picking AB is the rational choice, but if it consistently gets a worse outcome, then it's irrational behavior. What makes it irrational? The assumption that it can't influence what's in the boxes.
PS: The only counterargument is that you have "free will" and thus your choices can't be accurately modeled.
> And yes, with accurate modeling information can travel backward in time. Just consider people taking an umbrella to work because of a weather prediction. In this case the rain caused people to bring an umbrella before it happened.
The rain didn't cause this; the prediction of rain did. Comments like this, and your strange focus on simulation and modeling, lead me to believe that you are a little out of your element here. The questions raised and the paradox regarding choice are present no matter what the predictor's mechanism is, whether it is a perfect simulation or psychic connection with your mind, or messages from God.
Rain has no free will. In the face of a completely accurate prediction, neither do you. And without free will, the decision has already been made before you were on the stage, even if you were not aware that you had made it; otherwise you could not be 100% accurately modeled.
PS: The implications of not having free will are uncomfortable, but they fall directly out of having a completely accurate predictor. (And yes, this is often weakened to a semi-accurate predictor.)
The rain could not have caused people to bring an umbrella, because people brought an umbrella before it rained. Regardless of whether or not the universe can unfold in any other way than the way it does, something cannot be caused by another thing that occurred after it. It's in the definition of "cause and effect."
Also, given that the entire point of the paradox is to illustrate a problem in decision theory, it seems a particular waste of time to deny that any decision is being made at all. Read the original statement of the problem. Read it closely. Don't read junk on the Internet or jabbering by Christian apologists desperate for credentials. The problem has absolutely nothing to do with free will vs. determinism.
What do you think the point is, if it's not about free will? The only paradox is the assumption that you can make a choice that's not predictable. But if the conditions are such that there will be rain, it will rain; and if the conditions are such that you will pick AB, then you will pick AB.
Sure, if you can lie to the oracle and say you're going to pick B and then actually pick AB, clearly that's the better option. But if it can look past that lie and see how you think (aka read your source code), then that's not viable. If you say to the oracle "I am going to pick B, because you know what I am going to do," and then something predictably changes your mind, you still lose. The only option is to pick B and for that to be the truth, and if it's the truth, you pick B on stage.
PS: As for apologists, you seem to be stuck on the idea that thought is something other than a predictable electrochemical process in your brain, no different from a complex computer program. We can make pseudo-random choices, which are very useful in decision theory, but "free will" does not exist. In the end we are no less predictable than the rain.
The predictability or non-predictability of a given decision is irrelevant; there's no need to assume that an unpredictable choice can be made. Choosing both boxes always gets the maximum amount of money available on the table.
The point is about decision theory, which has two approaches considered "rational" that yield different results. That's why it's a paradox. It's all spelled out in the paper: http://faculty.arts.ubc.ca/rjohns/nozick_newcomb.pdf
Go ahead, search the document for the phrases "free will" or "determinism." I'll wait.