> I think you haven't thought about this deeply enough yet.
On the contrary, I've thought about it quite deeply. Or at least deeply enough to talk about it in this context.
> You take it as self evident that P(X) = 0.5 is false for that event, but how do you prove that?
By definition, a fair coin is one for which P(H) = P(T) = 1/2. See e.g. https://en.wikipedia.org/wiki/Fair_coin. Fair coin flips are also by definition independent, so you have a series of independent Bernoulli trials. So P(H^k) = P(H)^k = 1/2^k. And P(H^k) != 1/2 unless k = 1.
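The independent-Bernoulli calculation above can be sketched in a few lines (the function name is just for illustration; exact fractions are used to avoid floating-point noise):

```python
from fractions import Fraction

def p_all_heads(k: int) -> Fraction:
    # For a fair coin, flips are independent Bernoulli(1/2) trials,
    # so P(k heads in a row) = P(H)^k = (1/2)^k.
    return Fraction(1, 2) ** k

print(p_all_heads(1))    # 1/2 -- the only k for which P(H^k) = 1/2
print(p_all_heads(100))  # 1/2^100, astronomically small but nonzero
```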
> Assuming you flip a coin and you indeed get 100 heads in a row, does that invalidate the calculated probability? If not, then what would?
Why would that invalidate the calculated probability?
> If not, then what would?
P(X) = 0.5 is a statement about measures on sample spaces. So any proof that P(X) != 0.5 falsifies it.
I think what you're really trying to ask is something more like "is there really any such thing as a fair coin?" If you probe that question far enough you eventually get down to quantum computation.
But there is some good research on coin flipping. You may like Persi Diaconis's work. For example his Numberphile appearance on coin flipping https://www.youtube.com/watch?v=AYnJv68T3MM
You say a fair coin is one where the probability of heads or tails are equal. So let's assume the universe of coins is divided into those which are fair, and those which are not. Now, given a coin, how do we determine it is fair?
If we toss it 100 times and get all heads, do we conclude it is fair or not? I await your answer.
No it's not a tautology... it's a definition of fairness.
> If we toss it 100 times and get all heads, do we conclude it is fair or not?
This is covered in any elementary stats or probability book.
> Now, given a coin, how do we determine it is fair?
I addressed this in my last two paragraphs. There's a literature on it and you may enjoy it. But it's not about whether statistics is falsifiable, it's about the physics of coin tossing.
> This is covered in any elementary stats or probability book.
No, it is really not. That you are avoiding giving me a straightforward answer says a lot. If you mean this:
> So any proof that P(X) != 0.5 falsifies it
Then the fact that we got all heads does not prove P(X) != 0.5. We could get a billion heads and still that would not be proof that P(X) != 0.5 (although it would be evidence in favor of it).
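The evidence-versus-proof distinction can be made concrete with a likelihood ratio. A minimal sketch, with an arbitrary hypothetical bias of P(H) = 0.99 chosen purely for illustration:

```python
# Comparing P(100 heads) under two hypotheses about the coin.
fair_likelihood = 0.5 ** 100     # P(100 heads | fair coin)
biased_likelihood = 0.99 ** 100  # P(100 heads | hypothetical coin with P(H) = 0.99)

ratio = biased_likelihood / fair_likelihood
# The ratio is enormous -- strong evidence favoring the biased hypothesis --
# but finite for any bias short of P(H) = 1, so never a deductive proof.
print(ratio)
```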
> I addressed this in my last two paragraphs...
No, you did not. Again you are avoiding giving a straightforward answer. That tells me you are aware of the paradox and are simply avoiding grappling with it.
I think ants_everywhere's statement was misinterpreted. I don't think they meant that flipping 100 heads in a row proves the coin is not fair. They meant that if the coin is fair, the chance of flipping heads 100 times in a row is not 50%. (And that is of course true; I'm not really sure it contributes to the discussion, but it's true).
ants_everywhere is also correct that the coin-fairness calculation is something you can find in textbooks. It's example 2.1 in "Data Analysis: A Bayesian Tutorial" by D. S. Sivia. What it shows is that after many coin flips, the probability distribution for the bias of the coin converges to roughly a Gaussian around the observed ratio of heads and tails, where the width of that Gaussian narrows as more flips are accumulated. It depends on the prior as well, but with enough flips the data will overwhelm any initial prior confidence that the coin was fair.
The probability is nonzero everywhere (except P(H) = 0 and P(H) = 1, assuming both heads and tails were observed at least once), so no particular ratio is ever completely falsified.
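A minimal stdlib-only sketch of that Beta-Binomial calculation (assuming a uniform prior on the bias, so the posterior after h heads and t tails is Beta(h+1, t+1)):

```python
import math

def beta_pdf(p: float, a: float, b: float) -> float:
    # Density of Beta(a, b) at p, computed via log-gamma for stability.
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(p) + (b - 1) * math.log(1 - p))

def posterior(p: float, heads: int, tails: int) -> float:
    # Uniform prior on the bias -> posterior is Beta(heads + 1, tails + 1).
    return beta_pdf(p, heads + 1, tails + 1)

# After 100 flips with 50 heads the posterior peaks near 0.5...
print(posterior(0.5, 50, 50))
# ...and sharpens as more flips accumulate at the same ratio...
print(posterior(0.5, 500, 500))
# ...yet remains nonzero away from the peak: no bias in (0, 1) is
# ever completely falsified, only rendered extremely improbable.
print(posterior(0.9, 50, 50))
```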
Thank you, yes you understood what I was saying :)
> I'm not really sure it contributes to the discussion, but it's true
I guess maybe it doesn't, but the point I was trying to make is the distinction between modeling a problem and statements within the model. The original claim was "my theory is that probability is an ill-defined, unfalsifiable concept."
To me that's a bit like saying the sum of angles in a triangle is an ill-defined, unfalsifiable concept. It's actually well-defined, but it starts to seem poorly defined if we confuse that with the question of whether the universe is Euclidean. So I'm trying to separate the questions of "is this thing well-defined" from "is this empirically the correct model for my problem?"
Sorry, I didn't mean to phrase my comment so harshly! I was just thinking that it's odd to make a claim that sounds so obvious that everyone should agree with it. But really it does make sense to state the obvious just in order to establish common ground, especially when everyone is so confused. (Unfortunately in this case your statement was so obviously true that it wrapped around; everyone apparently thought you must have meant something else, and misinterpreted it).