

On the hazards of significance testing: the false discovery rate - jonathansizz
http://www.dcscience.net/2014/03/24/on-the-hazards-of-significance-testing-part-2-the-false-discovery-rate-or-how-not-to-make-a-fool-of-yourself-with-p-values/

======
sukilot
In short, a significance threshold of p = 0.05 means that 5% of experiments
on non-existent effects will produce a false positive.
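
To see this concretely, here's a quick Python sketch (the sample sizes and
the use of a t-test are purely illustrative assumptions): every experiment
compares two samples drawn from the same distribution, so any "significant"
result is a false positive by construction, and about 5% of them come out
significant anyway.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_experiments, n = 10_000, 30  # illustrative sizes

    # Both groups come from the SAME distribution: the null is true,
    # so every p < 0.05 here is a false positive by construction.
    a = rng.normal(0.0, 1.0, size=(n_experiments, n))
    b = rng.normal(0.0, 1.0, size=(n_experiments, n))

    _, p = stats.ttest_ind(a, b, axis=1)
    print(f"false positive rate: {np.mean(p < 0.05):.3f}")  # ~0.05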

But due to publication bias, negative results mostly go unpublished. When
there are lots of experiments looking for mostly non-existent effects, the
false positives can easily outnumber the true positives.
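
The arithmetic behind this is simple enough to write down. Assuming
(illustratively) a power of 0.8 and alpha = 0.05, the fraction of
"significant" results that are actually false, i.e. the false discovery
rate, depends entirely on the base rate of real effects among the
hypotheses being tested:

    def false_discovery_rate(prior_real, alpha=0.05, power=0.8):
        # Fraction of "significant" results that are false positives.
        # The alpha and power values are illustrative assumptions.
        true_pos = prior_real * power          # real effects correctly detected
        false_pos = (1 - prior_real) * alpha   # null effects wrongly "detected"
        return false_pos / (true_pos + false_pos)

    for prior in (0.5, 0.1, 0.01):
        print(f"{prior:.0%} real effects -> FDR = {false_discovery_rate(prior):.0%}")
    # 50% real effects -> FDR = 6%
    # 10% real effects -> FDR = 36%
    # 1% real effects -> FDR = 86%

At a 1% base rate of real effects, roughly six out of seven "discoveries"
are noise, even though every individual test was run at p < 0.05.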

This is why Bayesians insist on choosing a prior probability. Without it,
there is no way to interpret the result of an experiment.

The takeaway lesson here, which is horribly underemphasized in school, is
that statistics never answers yes/no questions! Statistics can only give you
a _function_ for combining a Prior with an Experiment to obtain a Posterior.
Without a Prior (which is subjective; it can never be chosen objectively!),
you cannot get a Posterior Probability out of an experiment.
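
A minimal sketch of that function, applying Bayes' theorem to a single
significant result (the power and alpha numbers are again illustrative
assumptions, and posterior_real is just a name I made up):

    def posterior_real(prior, alpha=0.05, power=0.8):
        # P(effect is real | p < alpha), via Bayes' theorem.
        # power = P(significant | real), alpha = P(significant | null).
        num = prior * power
        return num / (num + (1 - prior) * alpha)

    # The same significant result means very different things
    # depending on the (subjective) prior you bring to it:
    for prior in (0.5, 0.1, 0.01):
        print(f"prior {prior:.0%} -> posterior {posterior_real(prior):.0%}")
    # prior 50% -> posterior 94%
    # prior 10% -> posterior 64%
    # prior 1% -> posterior 14%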

