

Plausibility bias? You say that as though that were a bad thing - tokenadult
http://www.sciencebasedmedicine.org/index.php/plausibility_bias/

======
lotharbot
The key observation here is that you're much more likely to see false positives
coming from the implausible than from the plausible. This is simple statistics
-- if the significance threshold is p=.05, there's a 5% chance that "no
effect" will be mistaken for "effect". In cases where there should be no
effect (prior probability near 0), 5% of the time you'll get a false positive
and almost 0% of the time you'll get a real positive. In cases where it's
plausible there could be a real effect (say, prior probability 50%), you'll
get false positives only 2.5% of the time (5% of the 50% of cases where there
isn't an effect) to go with up to 50% real positives.
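That arithmetic can be sketched in a few lines (my own illustration, not from the comment; for simplicity it assumes perfect statistical power, as the figures above implicitly do):

```python
# Compare false-positive vs. real-positive rates at alpha = 0.05 for an
# implausible hypothesis (prior near 0) and a plausible one (prior = 0.5).

def positive_rates(prior, alpha=0.05, power=1.0):
    """Return (false_positive_rate, real_positive_rate) over all studies."""
    false_pos = alpha * (1 - prior)   # null is true, but the test fires anyway
    real_pos = power * prior          # effect is real and gets detected
    return false_pos, real_pos

for prior in (0.0, 0.5):
    fp, rp = positive_rates(prior)
    print(f"prior={prior:.2f}: false positives {fp:.3f}, real positives {rp:.3f}")
```

With prior 0 every positive is a false positive; with prior 0.5 false positives are a small minority of all positives.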

Thus, the argument goes, in order to reduce the ratio of false positives to
real signal, we should require more signal from implausible experiments. That
is, we should make the required p value scale with the prior probability. This
is a form of bias, but it's a very appropriately scientific form of bias --
bias based on evidence rather than error-prone forms of cognition. (This is
simply a statistical reframing of the statement "extraordinary claims require
extraordinary evidence".)

~~~
rfugger
Isn't this the very definition of confirmation bias?

~~~
pradocchia
Others can quibble over definitions, but yes, it's effectively a fudge factor
in favor of whatever we believe _ought_ to be so, and against whatever we
believe _ought not_ be so.

So on one hand, given the nature of statistics, it is necessary to cut down on
the noise. On the other hand, it puts the model before reality, and
necessarily obscures anything unexpected or contradictory.

~~~
lotharbot
It's not a matter of _ought_ versus _ought not_.

It's a matter of _evidence_. If we have strong evidence against hypothesis X,
we should require at least moderately strong evidence for hypothesis X before
we accept it. We can continue to explore it, but we shouldn't give a lot of
weight to weak evidence (such as likely false positives) in the face of
considerably stronger evidence against.

This does, necessarily, obscure certain types of surprising results when the
data supporting them is limited. That's OK. Science sometimes converges
slowly.

~~~
pradocchia
_If we have strong evidence against hypothesis X, we should require at least
moderately strong evidence for hypothesis X before we accept it._

That's a normative position: "We _should_ require..." And a normative position
with an excellent pedigree is still a _normative_ position.

That's not a bad thing, though. Good science requires good judgement. Good
hypotheses do not spring wholly formed from logical deduction; there is
necessarily an element of informed speculation and guesswork. Evidence must
then be interpreted, _weighted_ even, and credibility assigned to third-party
results.

So yes, it is very much a matter of _ought_ and _ought not_.

~~~
lotharbot
but it's not a matter of what we believe _ought_ to be so vs _ought not_ to be
so -- that is, it's not a matter of what we _want_ to be true. It's a matter
of the strength of evidence we _ought_ to have in order to draw conclusions,
given other sets of evidence. It's a normative position, but it's not the
_specific_ normative position (based on confirmation bias) you described
above.

Think of it as a spectrum. On one end is the pure flake -- always giving too
much weight to the newest evidence, no matter how weak, and therefore swinging
wildly between different beliefs (I've heard this called "regressive bias").
On the other end is the pure dogmatist -- always giving too much weight to
prior evidence, and therefore holding fast to a prior conclusion, ignoring or
dismissing even strong contrary evidence ("confirmation bias"). Somewhere in
the middle is the proper level of evidential weighting -- giving both old and
new evidence the appropriate level of consideration, and therefore changing
beliefs exactly as much as is warranted.

------
tommorris
Science-Based Medicine is a great blog. If you read HN, you should read SBM.
Indeed, you should go read the complete SBM backlog.

