
Reproducibility Crisis: The Plot Thickens - DiabloD3
http://blogs.discovermagazine.com/neuroskeptic/2015/11/10/reproducibility-crisis-the-plot-thickens/
======
danieltillett
From my perspective as a former academic there is no simple solution here. We
have created a system where the pressure to publish first and ask questions
later is totally overwhelming. This is a cancer within science that will
ultimately destroy public support for funding scientific research.

If we are going to solve this problem we need to reduce the pressure on
scientists to publish at all costs. This publish-or-die thinking is driven by
the extreme competition for grants and tenure tracked positions. Given I think
it unlikely that the amount of money available for research is going to
quadruple anytime soon (this is about the amount of money we would need to put
into the system to get rid of the publish-at-all-costs mentality) what can be
done?

1\. Break the link between publication and grant funding. We could move to a
system where, once you had proved you could do research (say, by publishing a
few papers) and could present a viable, doable research idea, assessed via a
simple grant proposal, you would go into a pool and grants would be allocated
by lottery. We should aim to have 50% to 75% of all scientists in this pool,
and then fund as many grants as the available money allows.

2\. Spend some of the current research money on long-term fellowships (say, 10
years), awarded by the same lottery as the grants.

3\. Drastically reduce PhD student training to bring supply and demand back
into line, e.g. by making it so a supervisor cannot supervise more than one
student at a time.

~~~
ars
> We have created a system where the pressure to publish first and ask
> questions later is totally overwhelming.

Why can't people simply publish negative results?

> Break the link between publication and grant funding.

Same answer - in fact give extra priority to those who publish lots of
negative results.

~~~
IshKebab
> Why can't people simply publish negative results?

Mainly because it's not as interesting. For every positive result researchers
find there are dozens of things they test that just don't work.

Imagine if the _first_ test of "romantic priming" didn't work. Would the
researchers really bother trying to publish "Hypothesised romantic priming
effect was just a wild hunch after all."? And would the journal accept it? No
way.

Of course if you get a negative effect for an established "fact", as in this
case it _can_ be published. But you have to be really really sure otherwise
the reviewers will just say you did your experiment wrong.

~~~
iamsohungry
> Mainly because it's not as interesting. For every positive result
> researchers find there are dozens of things they test that just don't work.

That seems to be the core of the problem. Reality doesn't have a bias toward
results that are interesting, humans do.

> Imagine if the first test of "romantic priming" didn't work. Would the
> researchers really bother trying to publish "Hypothesised romantic priming
> effect was just a wild hunch after all."? And would the journal accept it?
> No way.

So maybe they should.

------
cLeEOGPw
We need to start incentivising negative results, reproductions of results, and
even studies that find nothing at all. It should be possible to publish to
some kind of database a data analysis, with methods and data, even if you
don't find anything. Then other scientists could look at it and say, "OK, this
seems to yield no result, but I could try to change it", and either they find
the effect and publish, or they don't find it and publish anyway. Either way
we build up a collection of results and can be much more skeptical when, say,
one of 8 studies finds an effect with p barely below 0.05.
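The "one of 8 studies" worry can be made concrete with a minimal simulation
(the sample size and alpha here are illustrative assumptions): suppose the true
effect is zero and 8 independent studies each test it at alpha = 0.05.

```python
import random

random.seed(0)

def study(n=30, alpha_z=1.96):
    """Simulate one two-group study of a true-null effect.
    Returns True if it (spuriously) reaches significance."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5          # std. error of the difference of means
    return abs(mean_diff / se) > alpha_z  # two-sided alpha ~ 0.05

# Chance that at least one of 8 null studies comes out "significant"
trials = 10_000
hits = sum(any(study() for _ in range(8)) for _ in range(trials))
print(f"P(at least one of 8 null studies significant) ~ {hits / trials:.2f}")
# theory: 1 - 0.95**8 ~ 0.34
```

Even with no real effect at all, roughly a third of such 8-study batches
contain at least one "significant" finding, which is exactly why a registry of
null results would change how we read the one positive study.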

~~~
bambax
Yes. If journals are biased against negative results, then governments should
step in. Why do we need "journals", with the implication of a limited page
count per issue, anyway? In this digital age a database or wikipedia-style
format should be able to host any number of results.

And it should not matter that some governments won't fund such an initiative:
all that is needed is one, or a single benefactor, or maybe a TV show à la
Mythbusters. ResearchBusters would be a hell of a show!

~~~
thinkling
I recommend reading responses to other branches of the grandparent that
address this.

I agree that if people do decent studies and get null results, it would be
nice if the data became publicly available so further studies can be seen in
the context of previous negative results.

But given the need for novelty in results, I wonder how much work actually
duplicates work previously undertaken with a null result. I suspect it's much
less than many people in this discussion are assuming.

And secondly, there's the issue of forcing people to do decent write-ups.
Study results are useless without a very detailed description of both the
experimental setup and the data collection/analysis procedures that were
followed. Those write-ups take time and effort, and for an unpublishable
result many researchers won't find the motivation. One could argue that
funding agencies and institutional IRBs must force researchers to write much
of this ahead of running the experiment. That might help a bit, but will still
leave substantial holes.

------
api
This is a direct result of "publish or perish," which basically gamifies
academic advancement as a function of churning out publications. You get what
you incentivize, and we wanted papers... we didn't say anything about correct
reproducible results.

~~~
WalterSear
And reform will likewise be pursued about as fast as Congress reforms campaign
finance. Too many people have their livelihoods invested in the castles
they've built in the sky.

~~~
patkai
"their livelihood invested" \- somebody said that "you can't be a true
scientist if you don't rejoice when your model fails". As you suggest, a
researcher today can't afford that. The best of science is done when somebody
is "playing" curiously and without being invested in the expected results. In
today's academia you just can't do that, or very rarely.

------
Estragon
Man, that is such a beautiful result. Ugly implications, but I love the
analysis.

------
brownbat
There should be another level of blinding: one team writes the study design,
then sends the design off to a separate neutral group of practitioners who
implement the study and report all results, regardless of outcome.

It'd be a much more expensive way to do science. But if cost matters and
avoiding meaningless results doesn't, we shouldn't run studies, we should just
flip a coin on every question and save the hassle of pretending.

~~~
iamsohungry
That would likely be completely counterproductive. There's no such thing as a
separate neutral group of practitioners. More likely what will happen is it
will have a chilling effect on what science gets done. The Kinsey studies, for
example, got as far as they did largely because they were under the radar for
a long time. Imagine if, instead, Kinsey sent his study design off to a
"separate neutral group of practitioners". They would likely have exposed the
nature of the study and Kinsey would likely have been fired.

~~~
brownbat
> They would likely have exposed the nature of the study and Kinsey would
> likely have been fired.

If you mean because the scientific community was reactionary and would have
run him out, then that's a criticism of attitudes in the scientific community,
not of the usefulness of extra layers of blind in studies.

Even so though, your characterization of the scientific community is a bit
hyperbolic. Note that Kinsey gave many open discussions of his research area
as early as 1935, and yet somehow retained his position in "swinging" Indiana.

> There's no such thing as a separate neutral group of practitioners.

I doubt that's knowable.

There are helpful degrees of neutrality, though, and you can eliminate certain
specific sources of bias as they are identified. If there was no way to derive
any objective results at all, then it wouldn't matter if we just flipped
coins, because no studies would be worth anything.

Self-run studies are biased because significant results further a researcher's
career. If you have some other team run studies without knowing the author,
that team's bias is to establish its own credibility, which is orthogonal to
just 'manufacture significant results.'

------
mathgenius
Could this also be explained by the "influence" experimenters had on their
subjects? By influence I am supposing some kind of subtle cues... body
language, or whatever. It also makes some sense that a smaller sample size
(group of people) would be easier to influence.
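The sample-size point has a purely statistical side too: with fewer subjects,
random noise alone produces large apparent effects far more often, so any
subtle experimenter influence has much less to overcome. A minimal sketch
(the effect threshold and sample sizes are illustrative assumptions, with a
true effect of zero):

```python
import random

random.seed(1)

def observed_effect(n):
    """Mean of n draws when the true effect is zero (pure noise)."""
    return sum(random.gauss(0, 1) for _ in range(n)) / n

def rate_of_large_effects(n, threshold=0.3, trials=5_000):
    """Fraction of null studies of size n whose observed effect
    exceeds the threshold purely by chance."""
    return sum(abs(observed_effect(n)) > threshold for _ in range(trials)) / trials

# Smaller samples cross the same "interesting effect" bar far more often
for n in (10, 50, 200):
    print(f"n={n}: spurious large-effect rate = {rate_of_large_effects(n):.3f}")
```

The rate falls sharply as n grows, since the standard error of the mean
shrinks like 1/sqrt(n).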

------
chris_wot
I totally want proof that their paper is reproducible.

------
imperio59
Psychology is not a science. Psychologists are not taught statistics in their
curriculum. Why did anyone expect that they would come up with sound,
statistically accurate results?

You can read any two psychology "textbooks" and get two differing opinions
about how to address the same issues...

It's not a science folks. It's never been. It's always been an opinion-based
practice, and this just proves the fraud being perpetrated on the people of
this earth who trust these charlatans for help on matters ranging from
personal mental health all the way to court "expert" testimony that can swing
the decision of a judge or a jury one way or the other...

~~~
qume
This reply is to anyone reading this who takes the parent post seriously: the
scientific method is the best tool we have for gaining knowledge, and it is
effective even with difficult subject matter such as psychology. Psychology
isn't just science, it's difficult science.

BTW the only statistics I know are what I was forced to learn studying psych
(although I was a computer science major, which required none)

~~~
kwhitefoot
How can you do computer science without any stats? Don't you study machine
learning?

~~~
seanmcdirmid
Many of us are kind of old. Machine learning wasn't much of a thing in the
90s, and I think it has only become popular in the last few years.

