Where did I say publication is a claim of conclusive proof?
I said they published papers in which they claimed to have conclusively proved something, and it turned out they hadn't conclusively proved anything, specifically because their results couldn't be reproduced.
In case you're not familiar with how experiments are carried out in the natural sciences, "results couldn't be reproduced" means that:
1. They claimed they got <these results> with p < <this threshold>
2. Some other guys repeated the same experiment ("repeated" as in: they administered the same substances to a sample of equal size and measured the same parameters, all under similar conditions), and it turned out that on their data, p was through the roof (a rough simulation of what that looks like is sketched below).
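To make that concrete, here's a toy simulation sketch (my own made-up numbers, not anything from the actual papers; assumes numpy and scipy). With no real effect at all, about 1 in 20 "original" experiments clears p < .05 by sheer luck, and a faithful same-size replication of such a fluke clears it only about 1 in 20 times as well:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, alpha, trials = 30, 0.05, 10_000

false_positives, replicated = 0, 0
for _ in range(trials):
    # "Original" experiment: both groups drawn from the SAME distribution,
    # i.e. there is genuinely nothing to find.
    a, b = rng.normal(size=n), rng.normal(size=n)
    if stats.ttest_ind(a, b).pvalue < alpha:          # a chance "discovery"
        false_positives += 1
        # Faithful replication: same sample size, same conditions.
        a2, b2 = rng.normal(size=n), rng.normal(size=n)
        if stats.ttest_ind(a2, b2).pvalue < alpha:
            replicated += 1

print(f"original 'discoveries': {false_positives / trials:.1%}")       # ~5%
print(f"...which also replicated: {replicated / false_positives:.1%}") # ~5%
```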
In some cases, that was simply because the authors didn't publish enough information for their experiments to be repeated (I was close to making that mistake, too. Thank God for review committees). But in most cases, it happened because the authors cherry-picked data or "optimistically" interpreted results.
(Edit: Responsible review committees can sometimes spot the latter, but it's very hard to deal with the former. The correct thing to do is to have all researchers publish all their experimental data, even the data that wasn't included in the papers. A lot of researchers agree, but you'll find that a lot of companies employing researchers actively invent reasons why their researchers shouldn't do that.)
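The cherry-picking failure mode is just as easy to put numbers on. Under the null hypothesis p-values are uniformly distributed, so a paper that measures many parameters and reports whichever one happens to clear .05 has a far higher false-positive rate than the nominal 5%. A sketch (the parameter count is made up for illustration):

```python
import numpy as np

# Under the null hypothesis, p-values are uniform on [0, 1].
rng = np.random.default_rng(1)
p = rng.uniform(size=(100_000, 10))  # 100k "papers", 10 null parameters each

# Report whichever parameter clears .05: the per-paper false-positive
# rate jumps from the nominal 5% to roughly 1 - 0.95**10, i.e. ~40%.
print(f"{(p.min(axis=1) < 0.05).mean():.0%}")
```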
> If you set your p-value threshold at .05, then one in twenty experiments will produce a false positive.
> The reason is simple: given a p-threshold of .05, one in five experiments will yield a false positive.
Make up your mind already.
> Where did I say publication is a claim of conclusive proof?
Exactly where you typed: "It means that a quarter of the published papers say 'Hey, we did this experiment which offers conclusive proof of X.'"
Again, this is patently false because they did not publish papers claiming conclusive proof. They published papers claiming evidence in favor of a theory.
> Make up your mind already.
There's no reason to be disrespectful over a mistake. I meant 1 in 20 (a .05 threshold is 5%, i.e. 1 in 20; "one in five" would be a threshold of .2, hence the mix-up).
Returning to the point, it takes incredible mental gymnastics to argue that a false positive automatically degrades the status of a study from "scientific" to "unscientific":
1. The adjective "scientific" describes a method, not a result. Those speaking of "scientific results" are either (a) referring to "results of a scientific study" or (b) confused about what science is.
2. A false positive degrades the status of a result (not a study) from "evidence in favor of X" to "not evidence in favor of X".
> Returning to the point, it takes incredible mental gymnastics to argue that a false positive automatically degrades the status of a study from "scientific" to "unscientific"
No one said anything about a single false positive!
"Cannot be reproduced" means there were a lot of false positives. So many, in fact, that you can't really draw any conclusion from the experiment. (Edit:) Or more to the point, that the p value the original authors claimed was bullshit.
Reproducing an experiment means reproducing both the experimental technique and the sample.
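That asymmetry is exactly why repeated failed replications mean the claimed p value was bullshit rather than everyone else being unlucky. With invented but plausible numbers: an effect that's real at the claimed size clears the threshold in most same-size replications, while a fluke clears it at roughly the threshold rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, alpha, trials = 30, 0.05, 5_000

def significant_rate(effect):
    """Fraction of experiments reaching p < alpha for a given true effect."""
    hits = 0
    for _ in range(trials):
        treated = rng.normal(effect, 1.0, size=n)
        control = rng.normal(0.0, 1.0, size=n)
        hits += stats.ttest_ind(treated, control).pvalue < alpha
    return hits / trials

print(f"real effect (d = 0.8): {significant_rate(0.8):.0%}")  # roughly 85%
print(f"no effect   (d = 0.0): {significant_rate(0.0):.0%}")  # roughly 5%
```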