
Is there a publication bias in behavioral oxytocin research on humans? [pdf] - gwern
http://www.gwern.net/docs/statistics/2016-lane.pdf
======
mherdeg
Just taking this quick opportunity to post a link to my favorite journal, the
Journal of Articles in Support of the Null Hypothesis:
[http://www.jasnh.com/](http://www.jasnh.com/) .

JASNH is designed to accept papers that say "we tested for X effect on Y and
found no evidence that X affects Y". Traditionally, instead of publishing this
kind of research, people would keep changing parameters around until they got
some kind of positive, and presumably career-advancing, result.

I am a huge fan of JASNH because of the reminder that "falsifiability" can and
should happen.

They seem to be mostly getting behavioral-science style submissions but I am
pretty sure based on their charter they would take anything.

~~~
gohrt
JASNH is cool, but it's not about falsifiability. JASNH, as it says right in
the name, is about _not_ falsifying the null hypothesis. The other journals
are full of articles falsifying (null) hypotheses.

~~~
Bromskloss
Not falsifying the null hypothesis, but falsifying _other_ claims.

------
RcouF1uZ4gsC
This is super important for science. If we do not know the number of
unpublished negative studies, we cannot be certain that the published
positive studies actually mean anything, and we have the problem illustrated
by this xkcd: [https://xkcd.com/882/](https://xkcd.com/882/)
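
To see why, here is a minimal simulation of the xkcd scenario (plain Python
with numpy and scipy; the study count, sample size, and alpha are illustrative
assumptions, not numbers from the paper): run twenty studies of a true null
effect and imagine only the "significant" ones get published.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_studies, n_per_group, alpha = 20, 30, 0.05

    # Both groups come from the same distribution, so the true effect is
    # zero and every "significant" result below is a false positive.
    significant = 0
    for _ in range(n_studies):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(0.0, 1.0, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            significant += 1

    print(f"{significant} of {n_studies} null studies came out 'significant'")
    # Expectation: n_studies * alpha = 1 false positive per 20 studies. If
    # only those land in journals, the literature shows a clean "effect".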

------
bunderbunder
I think we may have found a situation where the inverse of Betteridge's Law is
more appropriate.

------
verytrivial
The title, while accurate, is perhaps unintentionally click-bait-y.

The article includes a less obtuse alternate sub-title: "Is there a
publication bias in behavioral intranasal oxytocin research on humans?"

If I'd seen _that_ I probably would not have clicked it to find out what on
Earth a "file drawer problem" was.

~~~
deelowe
The file drawer problem has been getting discussed quite a bit recently. There
was an article I read a month or so ago that specifically mentioned this being
an issue across many fields of research, with medicine being the largest
concern. There's a bit of a movement growing to try to address this throughout
the scientific community. The tendency to only publish positive results has a
chilling effect that could cause great harm to society.

~~~
verytrivial
I agree, though I think the "file drawer problem" is still jargon and not
quite a catch phrase (yet).

~~~
apathy
Publication bias is a substantially more descriptive and accurate phrase. Even
when you do submit studies with negative results for publication, many
journals simply will not accept them. Little to do with file drawers and much
to do with editors.

That said, power and experimental design -- can we confidently reject the
hypothesis that an effect size of at least X is present between groups A and B
(or models A and B) at some specified error bound? -- cannot be ignored.
Bayesian or frequentist, at some point you have to conclude the experiment or
write an interim report, and this involves making a decision in the face of
uncertainty.
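
To make the power point concrete, here is a hedged sketch using statsmodels
(the effect size d = 0.2 and the 80% power target are illustrative
assumptions, not figures from the linked paper):

    from statsmodels.stats.power import TTestIndPower

    # Sample size per group needed to detect a small effect (Cohen's
    # d = 0.2) with 80% power in a two-sided two-sample t-test at
    # alpha = 0.05.
    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.2, alpha=0.05,
                                       power=0.8, alternative='two-sided')
    print(f"required n per group: {n_per_group:.0f}")   # ~394

    # Conversely, the power a typical small study (n = 25 per group)
    # actually has against that same effect.
    achieved = analysis.power(effect_size=0.2, nobs1=25, alpha=0.05)
    print(f"power with n = 25 per group: {achieved:.2f}")  # ~0.11

A negative result from the n = 25 study says very little on its own, which is
exactly why negative and positive studies need comparable designs before you
weigh them against each other.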

Being a statistician means never having to say you're certain. It also means
never being able to do so (unless you're an irresponsible asshole). You can
declare a decision boundary and, based on that, evaluate the evidence. In
light of this, it is important that negative studies not have inferior designs
or sample sizes to the positive studies, and vice versa.

In humans, it is exceedingly difficult to control for all of the factors you
might want to. Nevertheless, the tendency of journals and PR outlets to favor
sexy outliers over the weight of evidence is a huge source of this bias, which
cannot be solely ascribed to unwillingness of researchers to submit the works
for publication. (It is also why preprints are so valuable and their uptake so
important.)

