I thought as you think. I wrote a simulation to show I was right. I was wrong.
I have had this interchange several times. Invariably it goes one of two ways. They have endless reasons why they have to be right and they don't need to write a goddamn simulation, or they tell me they wrote the simulation and they have learned they were wrong.
I think you’ve simply misread what they are saying. They are saying that the cases in which Monty opens a door and reveals the grand prize are the set of cases we do not care about, because we have already lost, and can thus be discarded.
This is basically just a different way of saying that Monty looks behind the door to be sure to only reveal goats.
Yes, the probability of wins will be different in these two scenarios, but it doesn’t affect the conclusion: when Monty reveals a goat, you should always switch.
Edit: if you believe I am the confused one after this comment, I will go make the simulation as you suggest.
That simulates a different scenario from what’s being described here; it simulates a situation where we count the times when Monty picks the car. But those situations are irrelevant because they have no bearing on the fundamental question of whether or not to switch doors when Monty shows you a goat. Effectively, we discard all outcomes where Monty shows you the car, making it the same “game” as when Monty simply never chooses the car.
As I said, the outcome per game will be different (since you suddenly have an additional opportunity to lose), but the math around whether or not to switch when shown a goat remains unchanged.
It will be different. The question comes down to P(winning by switching | Monty reveals a goat). The key difference is whether P(Monty reveals a goat | you chose a goat door the first time) is 50% or 100% (if you chose the car, there's a 100% chance he reveals a goat in either case). Since 'winning by switching' is the same as 'choosing a goat door first', you can apply Bayes' theorem to see how the results change.
To put it another way, if Monty is choosing randomly, then in half of the cases where you would win by switching, the game instead just ends/isn't counted; but the same is not true of the cases where you win by not switching. From a Bayesian point of view, Monty randomly revealing a goat should increase your belief that you picked the car the first time.
That said, switching is not worse than not switching unless Monty is biased towards revealing the car instead.
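Here is a minimal sketch of that comparison in Python (my code and names, not anyone's original simulation): it plays both variants, throws out the rounds where a random Monty reveals the car, and reports how often switching wins among the remaining rounds.

    import random

    def switch_win_rate(n_rounds, monty_is_random):
        switch_wins = stay_wins = 0
        for _ in range(n_rounds):
            car = random.randrange(3)                 # door hiding the car
            pick = random.randrange(3)                # contestant's first pick
            others = [d for d in range(3) if d != pick]
            if monty_is_random:
                opened = random.choice(others)        # Monty opens blindly
                if opened == car:
                    continue                          # car revealed: round discarded
            else:
                opened = next(d for d in others if d != car)  # Monty knowingly opens a goat
            if pick == car:
                stay_wins += 1
            else:
                switch_wins += 1                      # the unopened other door hides the car
        return switch_wins / (switch_wins + stay_wins)

    print(switch_win_rate(100_000, monty_is_random=False))  # ~0.667
    print(switch_win_rate(100_000, monty_is_random=True))   # ~0.5

The ~2/3 vs ~1/2 gap is exactly the Bayes' theorem point above: discarding the car-reveal rounds also discards half of the rounds you would have won by switching.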
So let's say you and I sit down, and do the following:
1) I roll a 3-sided die (or a 6-sided one, wrapping) and keep it covered.
2) You pick a number, 1-3
3) I flip a coin. If it's heads, I pick the lower available number; if it's tails, I pick the higher.
4) I peek at the die. If it's my number, we reveal and start over.
5) I (always, at this point) offer you a wager: if the die shows your number (so you would have lost if you switch), you pay me $7; if the die doesn't show your number, I pay you $5.
Assuming you believe the die and coin are fair, etc., would you agree to play that game 1000 times? In those games, is there a reason you would turn down the wager?
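A hedged sketch of that first game in Python (my code and names, assuming a fair die and coin): it settles the $7/$5 wager only in rounds where the die doesn't show my number, and the player's average result per wagered round comes out around -$1, since the die matches the player's number about half the time once my number is ruled out.

    import random

    def game_one(n_rounds):
        player_total = wagers = 0
        for _ in range(n_rounds):
            die = random.randrange(1, 4)              # the covered 3-sided die
            player_pick = random.randrange(1, 4)
            available = [n for n in range(1, 4) if n != player_pick]
            my_pick = min(available) if random.random() < 0.5 else max(available)  # coin flip
            if die == my_pick:
                continue                              # reveal and start over, no wager
            wagers += 1
            player_total += -7 if die == player_pick else 5
        return player_total / wagers

    print(game_one(100_000))   # roughly -1.0: this wager favors the host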
Is it the same game if I flip the coin secretly, peek at the die, announce my number (picked algorithmically in the obvious way), and then pay out according to whether switching would win (as above)?
Because (assuming I've explained these games as I intend... it's getting late) I would play the former with you, not the latter (but I would play the latter if we switched the payments around - I chose 5 and 7 because 7/12 is halfway between 1/2 and 2/3).
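Spelling out that break-even arithmetic (mine, not from the thread): with a $7 stake against $5, the wager is fair exactly when switching would win with probability 7/12, which sits between the 1/2 of the random-reveal game and the 2/3 of the standard Monty game.

    # Player's expected value per wager when switching would win with probability p:
    #   EV(p) = 5*p - 7*(1 - p) = 12*p - 7, which is zero at p = 7/12.
    for label, p in [("game 1 (random reveal)", 1/2),
                     ("break-even", 7/12),
                     ("game 2 (standard Monty)", 2/3)]:
        print(f"{label}: p = {p:.3f}, EV = {5*p - 7*(1 - p):+.2f} dollars per wager")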
Thank you (and everyone else in this thread) for sticking with it with me. It was a struggle to read that single line of code on a cell-phone screen, compounded by the fact that, as it turns out, I don't understand the Monty Hall problem even though I thought I did. I'm going to sit with this one for a while.