
The foot soldiers behind psychology’s replication crisis - sampo
https://www.chronicle.com/article/I-Want-to-Burn-Things-to/244488?key=ONA-J8qTe05O7njbTd0tJ5MGlT3EF5H5UcVzn-A0SjvQuzkOG60mekK3-jrAePM-N1hXSXktZXhZb2x6RVdBSDZBTVJHZHFYRVVPOVV4Z0tjRW9RbWlmcVlqMA
======
morley
Is there a list of well-known studies that have been overturned because they
haven't been reproducible? I'm wondering how much "common knowledge" that has
come from those studies is actually true or not.

~~~
nonbel
There are, but the "replication studies" tend to change little details for
whatever reason, so they aren't really checking for reproducibility anyway.

I don't mean details like where/when the study was performed, because
obviously the original authors thought it would generalize beyond their exact
sample. I mean they will change the methods to ask different questions on a
survey, etc.

Also, they use statistical significance versus non-significance to decide
whether the same result was observed (rather than checking for quantitatively
similar results)... The entire situation is ridiculous.

You can start here though:
[https://en.wikipedia.org/wiki/Reproducibility_Project](https://en.wikipedia.org/wiki/Reproducibility_Project)
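
That significance-versus-non-significance criterion is easy to illustrate with a small sketch (pure Python; the effect size and sample sizes are hypothetical, chosen only for illustration). Two studies can estimate exactly the same effect, yet the smaller one lands on the other side of the p < .05 line:

```python
import math

def z_stat(effect, sd, n):
    # One-sample z statistic: estimated effect divided by its standard error.
    return effect / (sd / math.sqrt(n))

# Two hypothetical studies that estimate the *same* effect (0.2 sd units),
# differing only in sample size.
z_original    = z_stat(0.2, 1.0, 120)  # n = 120 -> z ~ 2.19, p < .05
z_replication = z_stat(0.2, 1.0, 60)   # n = 60  -> z ~ 1.55, p > .05

print(round(z_original, 2), round(z_replication, 2))
```

By the vote-counting criterion the replication "failed", even though both studies observed a quantitatively identical effect.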

------
acover
Ranking of universities by replicability

[https://replicationindex.wordpress.com/2016/03/02/replicability-ranking-of-psychology-departments/](https://replicationindex.wordpress.com/2016/03/02/replicability-ranking-of-psychology-departments/)

------
mcguire
" _One evening at the SIPS conference, after sessions had been concluded and
beers consumed, a researcher asked Heathers for his opinion of a study that
seemed suspicious. When Heathers decided he’d heard enough to render a
verdict, he took a few steps back and began to shout._

" _" Give me a B!"_

" _" B!" the assembled scientists replied._

" _" Give me a U!"_"

And here we see how more bullshit gets started. Deciding that something is
wrong because it sounds bad is exactly the same error as deciding that
something is right because it sounds good.

~~~
jcranmer
> Deciding that something is wrong because it sounds bad is exactly the same
> error as deciding that something is right because it sounds good.

False equivalence. A correct paper inherently has no flaws, while a wrong
paper inherently has at least one flaw. It is very possible to be able to
immediately detect some classes of flaws, and that in no way means that one
can immediately find all of the flaws.

When I was a TA, I found several people who cheated. One person I discovered
because there's no way a struggling student in an introductory C course is
going to pull out code with gotos and register keyword usage plastered
everywhere. Do I claim to have caught everyone who cheated? Hell no.

~~~
eslaught
I don't know if you've ever been subject to scientific peer review or not. One
of the most frustrating things is reviewers who misunderstand the paper, but
are convinced they've found some sort of problem with it. Admittedly, these
sorts of issues usually point to places where the paper could be more clear.
However, the point still stands that it is far easier to misunderstand
something than it is to understand it and find a flaw.

I would be very, very suspicious of any scientist claiming to have found a
flaw in a work they've only been given a superficial description of. It may be
accurate to say they have a hunch of where a flaw might be. But without really
spending time with a paper it would be foolish to claim any degree of certainty.
Only very low quality papers can be rejected so quickly.

~~~
TangoTrotFox
There was an interesting study recently showing that prediction markets
actually did a better job than journals of identifying non-replicable papers
in the social sciences. [1][2] A group of researchers took 21 papers published
in Nature and Science from 2010 to 2015; 13 of them could be replicated. The
prediction market correctly identified all 13 replicable papers and 5 of the 8
non-replicable ones, and gave the remaining 3 roughly 50/50 odds. One caveat
is that even in the studies that did replicate, the effect size was found to
be, on average, only 50% of the stated effect size.

I think there's long been a perception that social sciences are heavily
influenced by people who take whatever their biases are, create some
experiment specifically designed to show them, and then play with the numbers
or experiment's parameters until they manage to do so. This goes all the way
back to (and certainly before) Zimbardo's Stanford Prison Experiment. There
was nothing organic there. Participants, both prisoner and guard, were heavily
coached on how to act and, in their own words, saw the experiment more as an
acting role than emergent normal behavior. It seems that this perception is
accurate.

In a society where people are increasingly radicalizing on social views, we
ought to expect the social sciences to become even more dysfunctional in the
years to come. This sort of stuff is, in turn, casting a very dangerous cloud
over the rest of science, since people tend to extrapolate these actions and
behaviors in the social sciences to science as a whole.

In my opinion we need to start drawing a strong distinction between science
that yields falsifiability and predictability, and is driven exclusively by
_direct_ experimental results, and not-quite-science that is based on models
and abstract experimentation, is not falsifiable, and does not provide
meaningful predictions. By meaningful I mean that the whole point of
predictability is not to have something to encourage political action on, as
is often the case in social science, but to use as a litmus test for the
accuracy of a hypothesis: if the prediction holds, that strengthens the
hypothesis; if it fails, the hypothesis is false. Without falsifiability,
predictions are worthless.

[1] -
[https://www.bloomberg.com/view/articles/2018-08-30/prediction-markets-seem-to-assess-studies-better-than-peer-review](https://www.bloomberg.com/view/articles/2018-08-30/prediction-markets-seem-to-assess-studies-better-than-peer-review)

[2] -
[https://www.nature.com/articles/s41562-018-0399-z](https://www.nature.com/articles/s41562-018-0399-z)

~~~
sievebrain
The replication crisis is larger than the social "sciences". Microbiology is
also affected, and I've even seen evidence that occasionally computer science
is affected, though to nowhere near the same extent and for different reasons.

------
jancsika
Are there examples of these "data thugs" causing quality scientific work to be
unduly criticized?

For example, good faith research that cannot be replicated is part of science.
Are there researchers who have done such good-faith research (no p-hacking, no
egregious statistical errors) who now have to endure harassment and low-
quality criticism because of the work of the "data thugs"?

~~~
mmirate
"Bad faith" research is almost certainly non-replicable, but non-replicable
research isn't necessarily in "bad faith".

But that doesn't matter, because non-replicable research is still non-
replicable research.

If you study a nonexistent phenomenon enough times, you can produce a null-
hypothesis rejection anyway; and if you study a nonexistent phenomenon once,
that one study still has a chance (albeit minute) of rejecting the null
hypothesis. The first is bad faith, the second is either bad luck or
incompetence, but neither is an accurate representation of whatever was being
studied.
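
The first failure mode is just the multiple-testing problem, and a rough sketch shows how quickly the odds climb (pure Python; the sample size, rejection threshold, and study counts are made-up illustration values):

```python
import random
import statistics

random.seed(42)

def false_positive(n=30):
    """One study of a nonexistent effect: sample from a mean-zero population
    and 'discover' something if the sample mean is more than ~1.96 standard
    errors away from 0."""
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    se = statistics.stdev(sample) / n ** 0.5
    return abs(statistics.mean(sample)) / se > 1.96

# Each single honest study rejects the null only ~5% of the time, but run
# 20 of them and the chance that *at least one* 'finds' the effect is large:
p_at_least_one = 1 - 0.95 ** 20
print(round(p_at_least_one, 2))  # 0.64

# Empirically, any one study's false-positive rate stays near 5%:
rate = sum(false_positive() for _ in range(2000)) / 2000
print(rate)
```

So a "discovery" of a nonexistent phenomenon is more likely than not after a couple dozen attempts, even with no misconduct beyond selective reporting.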

> good faith research that cannot be replicated is part of science.

To science's detriment, yes.

~~~
nonbel
>"But that doesn't matter, because non-replicable research is still non-
replicable research"

Exactly, the people who say direct replications are unnecessary are the ones
fostering an environment for fraud. To the scientist it doesn't matter much
why you couldn't describe what you did well enough for other people to repeat
it. Maybe you made it up, maybe it depended upon the (unmeasured) magnetic
field in your room, whatever.

------
iron0013
This is not psychology's replication crisis. Every field has replication
problems; psychology is just one of the only fields taking it seriously.

~~~
acover
Source?

I don't hear about rampant replication problems in chemistry/physics.
[https://en.wikipedia.org/wiki/Replication_crisis](https://en.wikipedia.org/wiki/Replication_crisis)

~~~
schuetze
The replicability of the biomedical sciences is just as bad as, if not worse
than, in psychology.

[1]
[http://www.slate.com/articles/health_and_science/future_tens...](http://www.slate.com/articles/health_and_science/future_tense/2016/04/biomedicine_facing_a_worse_replication_crisis_than_the_one_plaguing_psychology.html)

[2] [https://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970](https://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970)

~~~
acover
Yes, that is covered in my wikipedia link and a comment below.

There is a replication crisis in many fields. Not every field.

~~~
thaumasiotes
There is a replication crisis in every field that is not disciplined by a need
to achieve externally-defined goals. Chemistry is in good shape because people
apply it in order to e.g. create steel.

The difference between a scientist and an engineer is that the scientist
answers the question "if I do X, what will happen?", and the engineer answers
the question "what do I do in order to get Y to happen?". But if there are no
engineers working off of the results, the scientist is free to say anything.

Sadly, as pointed out elsewhere in the subthread, this is a sufficient but not
a necessary condition. Medicine has plenty of well-defined goals, but it
doesn't know how to meet them. The cheap wisdom there is: if you want
something badly enough, there will always be someone willing to sell it to
you, whether or not they actually have it.

------
bnchrch
There's a lot of discussion here on why Psychology has this problem more so
than fields outside the social science realm.

I think we're all missing what is fundamentally flawed about academic
psychology; and it's not their methodology.

In North America (and perhaps elsewhere) you are required to have at least a
Master's degree to practise Psychology, and you should have a doctorate if you
want any mobility in your practice.

This leads to people who have no interest in academia having to find a way to
convince others they've discovered something new and novel, just so that they
can go apply what has already been discovered.

It's no surprise many of these studies can't be replicated! They were designed
from the beginning to lead to significant findings so that someone could
write their dissertation, get their doctorate, and get the hell out of there.

And you know what? I do not blame them in the slightest. Academia is a
nightmare and it's holding a whole profession hostage.

~~~
rhizome
Heck, _computer science_ has the same problem, and probably will as long as
psychology, since like every person, every business is different. Replication
difficulties follow naturally.

~~~
godelski
A difference, though: despite the name, computer scientists aren't calling
themselves scientists (I mean, what science do _most_ of us do?). Calling
yourself a scientist comes with a large amount of responsibility and
accountability.

~~~
peoplewrong
who is us? academic computer scientists?

------
ISL
Those attempting to make a science more reproducible are not behind the
crisis, they are part of the solution.

~~~
bassman9000
_" It’s a hard enough life to be a scientist," she says. "If we want our best
and brightest to be scientists, this is not the way to do it."_

 _And he’s a skeptic of this new generation of skeptics. For starters, Nisbett
doesn’t think direct replications are efficient or sensible; instead he favors
so-called conceptual replication, which is more or less taking someone else’s
interesting result and putting your own spin on it._

It's gotta be about feelings, then.

~~~
nonbel
Yep, his idea is for psychologists to never actually double-check each other's
work. He's probably scared of what would be found.

Edit:

And of course "_conceptual_ replications" should also be done. It's just that
they are not a replacement for _direct_ replications.

~~~
mcguire
How often are _direct replications_ done in other sciences?

~~~
nonbel
Direct replications are done in science 100% of the time something becomes
fact. Eg, see my other post here:
[https://news.ycombinator.com/item?id=17981869](https://news.ycombinator.com/item?id=17981869)

Btw, I don't consider either psychology or medical research to be a science at
this point. I just call them "research", which seems neutral enough to me.

------
peoplewrong
Disagreement, and people being wildly wrong in good faith [and other people
pointing it out!], is an essential part of the scientific process. It is
difficult because academics often closely associate themselves with the status
of their scientific contributions.

No one really likes killing their heroes, but sometimes it must be done.

If psychology claims to be a science it MUST collectively accept this. However
we must be sure to deal kindly with the people behind the ideas.

~~~
gboudrias
> If psychology claims to be a science it MUST collectively accept this

I'm an undergrad in psychology. To my dismay, there is no consensus that
psychology should be a science. You still hear claims that humans are too
great to be measured, etc. That we are more than material, and therefore can
never be studied objectively.

Of course, I disagree with this (everything real can be measured in some way),
but the problem runs way deeper. It is an epistemic disaster, where most
students do not try to learn the first thing about epistemology. Without
exaggeration, some psychologists just want to tell nice stories. I remain
baffled. I'm hoping time solves the issue, because I sure don't have a
solution.

~~~
ble
I'm of the opinion that the complexity of possible human behavior and
phenomena is too great to allow for some experiments to be replicated and
controlled.

There are some scientific facts about humans that you can establish because
you can do a replicated, controlled experiment and there are others that you
can't, for ethical (no one should allow this experiment to be done) or
material reasons (this experiment is very easy to conduct assuming you have
multiple copies of the planet earth).

~~~
rocqua
> I'm of the opinion that the complexity of possible human behavior and
> phenomena is too great to allow for some experiments to be replicated and
> controlled.

If these elements of human behavior cannot be confirmed by replicable
experiments, what chance do we have of knowing about them? Claims about such
behavior are nothing more than stories, for they are not based on evidence.

> There are some scientific facts about humans that you can establish because
> you can do a replicated, controlled experiment and there are others that you
> can't

There is no such thing as a 'scientific fact' that cannot be established by a
replicated, controlled experiment. The stated dichotomy is really important,
though, and one that psychology seems to have partially failed to make: the
field lost its reputation by presenting 'stories' as scientific fact.

That is not to say there is no value in studying human behavior that is beyond
science, but we need to realize that we cannot treat the result of this as
'scientifically true'. Instead, it is something like 'intuitively true based
on anecdotal experience'.

