
Tackling Human Biases in Science (2015) - dnetesn
http://nautil.us/issue/54/the-unspoken/the-trouble-with-scientists-rp
======
majos
This article makes good points. For me the most salient is that the current
reward structure ("how many strange new positive results have you published?")
skews published research across almost all scientific fields, and that science
in general needs to find a better way to incentivize the unglamorous work of
reproducing splashy results and reporting useful failures.

The suggested method of pre-registering hypotheses is certainly one way to
address these issues. There are also more complicated methods of attacking
self-fulfilling analyses through clever statistical tools [1], although I
think this technology is not quite ready for wide use.
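For the curious, the core idea behind [1] (the "reusable holdout" / Thresholdout mechanism) can be sketched in a few lines. This is my simplification, not the authors' exact algorithm: the noise distribution and the constants below are illustrative, and the real analysis is considerably more careful. The point is to answer an analyst's adaptive queries with the training mean unless it drifts from the holdout mean, so the analyst can only slowly "use up" the holdout set.

```python
import random
import statistics

def thresholdout(train, holdout, query, threshold=0.04, sigma=0.01):
    """Simplified sketch of the reusable-holdout idea from [1].

    `query` maps one data point to a value in [0, 1]; the analyst asks
    for its mean. Answer with the training mean unless it deviates
    suspiciously from the holdout mean; in that case, return a
    noise-perturbed holdout mean, which limits how much information
    about the holdout set leaks back to the analyst per query.
    """
    t_mean = statistics.fmean(query(x) for x in train)
    h_mean = statistics.fmean(query(x) for x in holdout)
    if abs(t_mean - h_mean) > threshold + random.gauss(0, sigma):
        return h_mean + random.gauss(0, sigma)  # noisy holdout answer
    return t_mean  # training answer already agrees with the holdout

random.seed(0)
train = [random.random() for _ in range(5000)]
holdout = [random.random() for _ in range(5000)]
answer = thresholdout(train, holdout, lambda x: x)
# Both samples are Uniform(0, 1), so the answer should be near 0.5.
```

In the paper the noise and threshold are calibrated so that validity guarantees hold even across many adaptively chosen queries; the sketch above only conveys the shape of the mechanism.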

But I do wish these articles did a better job of explaining how
field-specific the "replication crisis" is. It's a huge problem in psychology
(so much so that I now ignore pretty much any experimental psych finding) and
somewhat less so in medicine. It's far less of a problem in, say, physics or
computer science (the recent deep learning experimental explosion
notwithstanding), and maybe not a problem at all the closer those areas get to
math. Painting all scientists with this brush is not entirely fair.

[1] [https://arxiv.org/abs/1411.2664](https://arxiv.org/abs/1411.2664)

~~~
dflkajik
First, the problem is perhaps best described as a problem with the biomedical
sciences (where psychology generally lands in topic analyses). If anything,
the problem is worse in some areas of medicine than in psychology. Estimates
of the replication rate in psychology vary from 25% to 78%, and are often
around 50%. In other areas of medicine, such as cancer research and
pharmacology, estimates have often ranged from 11% to 25%. Some estimates for
areas of neuroscience are lower than psychology's as well.

Although I agree the problems probably vary across disciplines, it's also
important to keep in mind that this is partly because of where the flashlight
is pointing. Psychologists are studying this, and to a lesser extent medical
researchers, so they're scrutinizing what they're familiar with. You allude to
comp sci and AI, but as someone coming to that field from statistics, there's
a huge lack of awareness about the generalizability of conclusions (tweak on a
large dataset, assume the size of the data protects against everything). It
manifests not as people raising the concern and being dismissed, but as a
collective "if everyone is oblivious to it, there's no problem." I think it's
dangerous to assume that any area of science is immune.
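A toy illustration of why dataset size alone doesn't protect you: if you adaptively pick the best of many models on the same evaluation set, you can "find" signal in pure noise. (A made-up simulation of mine, not anything from the article.)

```python
import random

random.seed(42)
n_samples, n_models = 1000, 1000

# Labels are pure coin flips: no model can genuinely beat 50% accuracy.
labels = [random.randint(0, 1) for _ in range(n_samples)]

def random_model_accuracy():
    # A "model" that just guesses; its true accuracy is exactly 0.5.
    preds = [random.randint(0, 1) for _ in range(n_samples)]
    return sum(p == y for p, y in zip(preds, labels)) / n_samples

# Adaptive reuse: evaluate many models on the SAME set and keep the best.
best_accuracy = max(random_model_accuracy() for _ in range(n_models))
# best_accuracy lands noticeably above 0.5 despite there being no signal.
```

The inflation shrinks roughly like sqrt(log(models) / samples), so a bigger dataset helps, but "tweak many times on one large dataset" never drives it to zero.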

I also don't think it's really about "math" as much as it is about the
complexity of the domain. There's incredibly sophisticated math in many areas
of psychology (many AI models are variants of things routinely done in
psychology), but it doesn't get attention because it's inconsistent with the
assumptions (stereotypes?) of those outside the field, and those within the
field often take it for granted. Similar arguments can be made about various
areas of biomedicine. Science can either cede complex topics to those who
prefer demon haunting, or recognize that complex topics will be more
challenging than simpler ones.

I agree that more focus on sequential, adaptive model evaluation paradigms is
needed; they are fascinating in their own right and have broad applicability.

Overall, I see this as reflecting fundamental problems with the incentive
structures in academia and research at the moment. I really feel that academia
and research are deeply broken, and that it's not a statistical problem as
much as it is a problem of culture and economics. I say this as someone in the
field who wants to see it fixed, not as someone who wants to bring it down. I
agree with you completely that there's too much focus on glamour, fame, and
attention-seeking, and too little on anything else.

------
randcraw
Another reason negative research garners little support is the lack of
citations such papers are heir to. If a paper can't appear in a major journal,
it needs to show that it attracted mind share via a large number of subsequent
citations. But because confirmatory research is generally seen as a dead end,
especially when it's negative, it's unlikely to receive much attention in
later work. All the more reason not to rain on someone else's parade.

