
Daniel Kahneman “I placed too much faith in underpowered studies” - skmurphy
https://replicationindex.wordpress.com/2017/02/02/reconstruction-of-a-train-wreck-how-priming-research-went-of-the-rails/#comment-1454
======
icelancer
People are starting to learn that the vast majority of "science" is poorly-
controlled white papers that get accepted and are never looked at again,
unless it is by a group of replication-crazed people (or what I like to call
"actual scientists") reviewing conclusions drawn from decades-old papers.

Discouraging replication in the tenure track is a large contributor to this.
"Novelty" is literally written in the "guidelines for authors" sections of
many journals. They want the newest, brightest, most headline-catching
"research" to disseminate. And so do the educational institutions. No wonder
why the incentives are so perverse.

On top of this, most accepted research is allowed to be published without
open access, open data, or an open peer-review history (how many rounds it
went through, what the objections were, how the researchers answered them,
etc.), and with the aforementioned lack of replication.

It's incredibly frustrating being someone who loves science, works in the
field, and is skeptical of the system -- a skepticism that used to be a
prerequisite and is now looked at like Luddite behavior.

~~~
meri_dian
The reality is that genuinely interesting, productive and novel results are
hard to produce. Even for teams of highly intelligent researchers. The
scientific industry is built on the assumption that such results are in
greater supply - or more accessible - than they actually are.

~~~
lisper
This is a cultural holdover from the late 19th and early 20th centuries, when
novel results were much more plentiful. But now nearly all of the low-hanging
scientific fruit has been picked, so the scientific endeavor needs to adjust
its expectations. Unfortunately, culture changes more slowly than science.

BTW, the lack of new results is actually a good thing. It means we've got a
lot of stuff figured out.

~~~
jansho
> BTW, the lack of new results is actually a good thing. It means we've got a
> lot of stuff figured out.

Not necessarily. If anything, the more we find out, the more we realise how
much we don't know. Take quantum physics: that stuff has slowed down since
the 1930s.

It's not a bad thing; the journey's always fun too. But let's not kid
ourselves that we're close to figuring everything out.

~~~
smallnamespace
Personally, I interpret GP as _we have already proven many of the true
hypotheses that can be simply and easily formulated, based on the categories
and concepts that we currently find intuitive_.

And this shouldn't really be a surprise. Our cultures and societies have had
thousands of years of evolution; our human instincts have had at least
hundreds of thousands of years. In that time, both culture and instinct have
picked up many _true_ facts about the world that have helped us survive. A lot
of science has been taking those facts and systematically investigating how
far they hold.

So both of these can be true: the more questions we ask, the more questions we
have to answer, but also that many of the 'easy' hypotheses have been mined
out, and any true advance in science will by necessity pull us further and
further from the realm where our basic instincts and intuitions can usefully
guide us.

Quantum mechanics being the prototypical example, of course.

~~~
jansho
> the more questions we ask, the more questions we have to answer, but also
> that many of the 'easy' hypotheses have been mined out.

Like a tree? The main branches are the most visible, and so the most tackled.
The smaller branches are only realised when you're deep enough in. That's
where our age's advantage comes in, as we have many more qualified
'explorers' than before, but it also makes it all the more important that the
foundational branches are solid.

But then again, it may be closer to _readjusting our lens_. The big questions
(of the natural kind, not innovation like AI) today still pertain to what we
take for granted; quantum mechanics, for example, is making us question the
nature of reality again. But much as the discovery of light's wave behaviour
once threatened the particle theory of light, I'm optimistic that
reconciliation can happen, and that we can ascend to another level ... but
where can we stop? When can we say "This is it"?

Just musing...

------
jdoliner
I cringe reading "belief in well-supported scientific conclusions is not
optional."

Belief is always optional; if you want people to believe, you have to get
them to opt in to believing. Telling them they must believe rarely
accomplishes this and is emblematic of the type of scientism that has
displaced real science. Of course, the thing this clown was telling us we
didn't have the option of disbelieving turns out not to be true. I suppose
now he's willing to grant us that option. How exactly does Kahneman expect
people to distinguish "well-supported" science from not-"well-supported"
science when he himself is obviously wholly incapable of doing so?

~~~
EnFinlay
Belief should not be optional in the face of well-supported scientific
conclusions. The problem is that it's easy to mistake unsupported conclusions
for well-supported ones.

~~~
jdoliner
> Belief should not be optional

What does that mean in real terms? What do you propose doing to those people
who take the option of not believing? If the answer is nothing then belief is
indeed optional.

~~~
smhost
Disinvited from social events, placed lower in the mate selection hierarchy,
opinions not taken seriously, mocked by the media -- all the normal ways
culture shapes behavior.

------
ChuckMcM
I find the response as interesting as the facts. For me, science is always "as
far as we know..." and for others, they have created anchors based on a
scientific result which makes them angry if that science is later altered.

I think Kahneman's response was spot on and real. And I think by responding
that way he gives meaning and life to the scientific _process_ of constantly
questioning and reviewing and trying to improve.

So much about a person is evident when you challenge their beliefs. There is a
range of responses but I find people who respond with "help me understand how
you arrived at your [conflicting] result" get to better results than people
who respond with "if you believe that [conflicting] result then you must not
understand what I said, here let me explain it again in simpler terms for
you." The latter don't do as well and don't have as much impact.

------
Bucephalus355
I really hope this comment gets found.

Barbara Ehrenreich wrote a book in 2009 called "Bright-Sided" that was an
incredibly devastating critique (if you managed to hear about it) of Positive
Psychology. I finished the book a week ago, and I say this as someone who
really likes Tony Robbins (and still does): it's scary stuff. The most
horrifying part, although not too unexpected, is a conference near the end
packed with Positive Psychology PhD students champing at the bit to get the
sweet, sweet consulting jobs in the various global cities around the world,
with Martin Seligman, the former head of the American Psychological
Association (APA), having to subtly hint that it's all a bubble, and these
students realizing they might have been lied to.

I should also say that she has many concerns with Martin Seligman, a
researcher of somewhat similar stature to Kahneman.

In defense of all these people though, with the cuts to research funding over
the years what else could they have done?

A deeper point here is the general failure of Psychology. Karen Horney, a
tremendously talented psychotherapist from the 1930s, argued that true
psychology is more sociology than psychology. In general this focus on the
mind, on the internal, (1) is a very Western style of thinking (specifically
Calvinist; the Calvinists were obsessed with deep introspection) and (2) has
only accelerated over the years, as positive self-help thinking asks people
to retreat further and further into the mind as a defense against lower
incomes, lower economic mobility, or Facebook-incited jealousy.

~~~
wu-ikkyu
You might find the works of Thomas Szasz to be of interest. His viewpoints are
very similar to the ones you have mentioned in regards to psychiatry.

[https://en.m.wikipedia.org/wiki/Thomas_Szasz](https://en.m.wikipedia.org/wiki/Thomas_Szasz)

------
new299
For those who like me didn't know what "priming" is in this context:

"Priming is a theory in which an implicit memory effect in which exposure to
one stimulus (i.e., perceptual pattern) influences the response to another
stimulus. ... For example, NURSE is recognized more quickly following DOCTOR
than following BREAD."

[https://en.wikipedia.org/wiki/Priming_(psychology)](https://en.wikipedia.org/wiki/Priming_\(psychology\))

~~~
bzbarsky
To be clear, the DOCTOR/NURSE example is an example of "semantic priming" as
mentioned near the end of the blog comment. There's decent experimental
evidence for semantic priming.

The "priming" that's mostly being discussed in the comment and blog post it's
replying to, on the other hand, is macro-behavioral priming, for lack of a
better term: priming significantly affecting complex physical or social
behaviors. Things like "seeing old-age-related words makes you walk slower" or
"holding a hot vs cold beverage for a few seconds before an interview
radically changes your opinion of the interviewer". The evidence here is...
well, that's a lot of what the non-replicability fuss is about.

------
skmurphy
Summary: explains the context and background for a comment left by Kahneman at
[https://replicationindex.wordpress.com/2017/02/02/reconstruc...](https://replicationindex.wordpress.com/2017/02/02/reconstruction-of-a-train-wreck-how-priming-research-went-of-the-rails/#comment-1454)

"I accept the basic conclusions of this blog. To be clear, I do so (1) without
expressing an opinion about the statistical techniques it employed and (2)
without stating an opinion about the validity and replicability of the
individual studies I cited.

What the blog gets absolutely right is that I placed too much faith in
underpowered studies. [...] My position when I wrote “Thinking, Fast and Slow”
was that if a large body of evidence published in reputable journals supports
an initially implausible conclusion, then scientific norms require us to
believe that conclusion. Implausibility is not sufficient to justify
disbelief, and belief in well-supported scientific conclusions is not
optional. This position still seems reasonable to me, but the argument only
holds when all relevant results are published."

[Edit/add] Kahneman also outlined an approach to address these concerns in
2012, in an open letter in Nature; see
[https://www.nature.com/polopoly_fs/7.6716.1349271308!/suppin...](https://www.nature.com/polopoly_fs/7.6716.1349271308!/suppinfoFile/Kahneman%20Letter.pdf)
[linked from
[https://replicationindex.wordpress.com/2017/02/02/reconstruc...](https://replicationindex.wordpress.com/2017/02/02/reconstruction-of-a-train-wreck-how-priming-research-went-of-the-rails/)],
which was apparently ignored by the priming researchers.
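
To make "underpowered" concrete, here's a rough simulation -- the true effect
size (Cohen's d = 0.3) and the sample sizes are illustrative assumptions, not
numbers from any of the cited studies:

```python
# How often does a two-group study detect a modest true effect?
# Simulate many experiments and count significant results (p < 0.05).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, alpha, trials = 0.3, 0.05, 10_000  # assumed effect size, threshold

for n in (20, 80, 320):  # participants per group
    hits = sum(
        stats.ttest_ind(rng.normal(0, 1, n), rng.normal(d, 1, n)).pvalue < alpha
        for _ in range(trials)
    )
    print(f"n = {n:3d} per group -> power ~ {hits / trials:.2f}")

# Roughly: n=20 -> ~0.15, n=80 -> ~0.48, n=320 -> ~0.97. A study with 20
# subjects per group usually misses a real effect, and the "significant"
# results it does produce tend to overestimate the effect size.
```

A literature built from the small-n end of that table, then filtered for
significance, looks much stronger than the underlying effect actually is.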

~~~
rhizome
are you quoting from something?

~~~
skmurphy
Yes, I put the link before the quote:
[https://replicationindex.wordpress.com/2017/02/02/reconstruc...](https://replicationindex.wordpress.com/2017/02/02/reconstruction-of-a-train-wreck-how-priming-research-went-of-the-rails/#comment-1454)

------
mherdeg
Just a quick plug for one of my favorite periodicals, the Journal of Articles
in Support of the Null Hypothesis:
[http://www.jasnh.com/](http://www.jasnh.com/) .

If you're worried about the "file drawer problem" JASNH might help you feel a
little better about the future.

Naively one would think that the scientific method -- "guess that X does Y,
try it, see whether it worked" -- depends as much on people telling each
other what DIDN'T work as what DID. Yet many journals will refuse to publish
negative results (perhaps aside from attempts to replicate significant
results).

JASNH takes the opposite publication bias and gives a refreshing view into
the work people are doing that takes up real research effort, seems to be
real work, but didn't show an effect.

I have no illusions that JASNH is, like, a super impactful journal -- it's
online-only and accepts a weird mix of disciplines -- but it's nice to see
that people are trying to do the right thing.

------
rgejman
While I agree with Kahneman that it is easy to fool yourself by looking at
studies with small sample size, the recent focus on reproducing existing
research is, to my mind, misguided. It is extremely difficult to perform
experiments exactly the same way, even within one lab, much less between many
different labs. For a good example of this, see this fascinating description
of how several labs tried to standardize the way they handle C. elegans
(worms) to identify compounds that extended the lives of the animals:
[http://www.nature.com/news/a-long-journey-to-reproducible-re...](http://www.nature.com/news/a-long-journey-to-reproducible-results-1.22478)

While some effort to standardize is important, it also wastes a lot of time
setting up a specific set of experimental conditions that may not have much
resemblance to the conditions that obtain in the real world. In my opinion, we
learn much more by taking someone's existing result, thinking through the
consequences and then designing well-powered experiments that probe the
assumptions, mechanisms and applicability of the result. With critical eyes
and diverse systems, we won't fool ourselves.

One more note: if this topic interests you, please read The Structure of
Scientific Revolutions. If you are unfamiliar with the book, I guarantee it
will completely change how you think about science as a human endeavour and
make you much more comfortable with the existence of long periods of time
where science just gets some things wrong.

~~~
Vinnl
> In my opinion, we learn much more by taking someone's existing result,
> thinking through the consequences and then designing well-powered
> experiments that probe the assumptions, mechanisms and applicability of the
> result.

I think that, too, is considered replication by many.

~~~
rgejman
The replication projects I am specifically referring to are similar to eLife's
"Reproducibility Project: Cancer Biology":
[https://elifesciences.org/collections/9b1e83d1/reproducibili...](https://elifesciences.org/collections/9b1e83d1/reproducibility-project-cancer-biology)
[http://www.sciencemag.org/news/2017/01/rigorous-replication-...](http://www.sciencemag.org/news/2017/01/rigorous-replication-effort-succeeds-just-two-five-cancer-papers)

------
criddell
> This does not mean that we can trust the published results, but it does
> suggest that some of the published results might be replicable in larger
> replication studies with more power to detect small effects. At the same
> time, the graph shows clear evidence for a selection effect.

So, there might be a priming effect but the studies that Kahneman used don't
necessarily show that? Is that right?

~~~
jerf
"So, there might be a priming effect but the studies that Kahneman used don't
necessarily show that? Is that right?"

Perhaps more importantly, there still might _not_ be.

Part of the problem is that priming isn't a binary thing, but a range. Some
uses of things that could be described as priming are so well established that
even if they are not "science", they are certainly engineering, inasmuch as
marketers successfully use them routinely. On the other hand, studies that
seem to show that if you flash words of negative connotation faster than they
can be consciously read (or possibly even consciously seen), pictures
conforming to stereotypes associated with those words are recognized slightly
more quickly, may turn out to be the bunk they intuitively seem to be after
all.
(Note that I'm not saying they're bunk because our intuition says they are.
But contrary to what seems to be a somewhat popular belief, it _is_ in fact
possible for our intuition to be correct. It's one of those things where you
only ever hear about where it's wrong, precisely because that is in some sense
_news_. It's right quite often, more so for one trained on existing science.)

Personally I'd say this is one of those cases where the recent Nature proposal
to up the standard of significance from 0.05 to 0.005 would probably have been
helpful.
[https://news.ycombinator.com/item?id=15192610](https://news.ycombinator.com/item?id=15192610)
If implemented it wouldn't solve everything instantly, but it would certainly
raise the bar on this sort of side track being taken.
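
Back-of-the-envelope on what that threshold change costs in sample size,
using the standard normal-approximation power formula for a two-sample test
(the effect size d = 0.4 is just an illustrative assumption):

```python
# n per group ~ 2 * (z_{alpha/2} + z_{power})^2 / d^2  (normal approximation)
from scipy.stats import norm

def n_per_group(d, alpha, power=0.8):
    return 2 * (norm.ppf(1 - alpha / 2) + norm.ppf(power)) ** 2 / d ** 2

d = 0.4  # assumed effect size (Cohen's d)
for alpha in (0.05, 0.005):
    print(f"alpha = {alpha}: ~{n_per_group(d, alpha):.0f} subjects per group")

# alpha = 0.05  -> ~98 per group
# alpha = 0.005 -> ~166 per group (roughly 70% more participants)
```

So the stricter threshold is costly but not prohibitive; it mostly prices out
the tiny samples that caused the trouble in the first place.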

------
yyyyip
Honestly, the fake Nobel prize for economics really diminishes the credibility
of the Nobel prizes and science in general. I like economics, have studied it
and follow it, but it does not deserve to be placed alongside the hard
sciences, where actual irrefutable progress is occurring.

~~~
gideon_b
Nobel prizes are awarded for Chemistry, Literature, Peace, Physics, Medicine,
and Economics.

Cultural progress does not hinge on advances in hard science and the Nobel
prize is wise to understand that.

Failing to recognize the impact of literature, peace and economics on our
society is a failure to understand the entire purpose of the award.

~~~
yyyyip
I think economics itself is very important for society. I just don't think
economists add much to the field.

Peter Nobel, a human rights lawyer and great-grandson of Alfred Nobel,
explained that "Nobel despised people who cared more about profits than
society's well-being", saying that "There is nothing to indicate that he would
have wanted such a prize", and that the association with the Nobel prizes is
"a PR coup by economists to improve their reputation".

So often I read some bullshit article with a headline touting the author as
a Nobel prize winner, and it is always the economics prize. It debases the
achievements of the other Nobel prize winners, both in the hard sciences and
in literature/peace.

Even Hayek himself was against the Nobel prize for economics because: "The
Nobel Prize confers on an individual an authority which in economics no man
ought to possess.... This does not matter in the natural sciences. Here the
influence exercised by an individual is chiefly an influence on his fellow
experts; and they will soon cut him down to size if he exceeds his competence.
But the influence of the economist that mainly matters is an influence over
laymen: politicians, journalists, civil servants and the public generally."

~~~
huac
Alfred Nobel was an arms manufacturer; that's literally putting profits
above society's well-being.

~~~
aqsalose
And as the story goes, after reading an obituary that argued exactly that
("reports of my death are greatly exaggerated" kind of situation), Nobel
drafted a peculiar testament about how his fortune should be used after his
death. [1]

>The whole of my remaining realizable estate shall be dealt with in the
following way: the capital, invested in safe securities by my executors, shall
constitute a fund, the interest on which shall be annually distributed in the
form of prizes to those who, during the preceding year, shall have conferred
the greatest benefit to mankind. The said interest shall be divided into five
equal parts, which shall be apportioned as follows: one part to the person who
shall have made the most important discovery or invention within the field of
physics; one part to the person who shall have made the most important
chemical discovery or improvement; one part to the person who shall have made
the most important discovery within the domain of physiology or medicine; one
part to the person who shall have produced in the field of literature the most
outstanding work in an ideal direction; and one part to the person who shall
have done the most or the best work for fraternity between nations, for the
abolition or reduction of standing armies and for the holding and promotion of
peace congresses.

(You will also notice that the Economics prize is a later addition, having
nothing to do with Nobel's will. I've always wondered why the Swedes picked
economics as the only field worthy of being added as a prize category despite
not being mentioned in the original testament.)

[1] [https://www.nobelprize.org/alfred_nobel/will/will-full.html](https://www.nobelprize.org/alfred_nobel/will/will-full.html)

------
tryitnow
Kudos to Kahneman for owning his mistake. That's the right sort of behavior to
model for younger generations of scholars.

I aspire to this level of self-awareness, accountability and integrity.

~~~
whipoodle
Sure. Though maybe he could say it a little louder than just a comment on a
random blog post. Just a thought.

~~~
Lon7
To be fair, the "train-wreck" the blog comments on was predicted by Kahneman
himself back in 2012. In fact, as shown in the article, he was personally
urging researchers to question and check if the priming research studies were
actually correct.

But you are right. I still see his book cited all the time in popular media.
He should be much more vocal about what he got wrong.

------
victor106
Whatever you think about DK, the fact remains that someone of his stature
admitted an error, and not just a small one but something as big as this.
That in and of itself is something worth applauding.

~~~
anitil
It speaks volumes that he'd personally respond like this. It shows how much he
cares about his research as opposed to being 'right'

~~~
ec109685
How is owning up to a mistake something to be specifically applauded?

He is a very distinguished scientist with a huge amount of published work. It
is far easier for him to own up to a mistake than a fledgling scientist just
getting started. He has far less to "lose".

Admitting mistakes should be normal behavior for everyone.

~~~
quiteawhile
>Admitting mistakes should be normal behavior for everyone

Agreed. But it isn't, and applauding it is a way to provide moral incentive to
those who might otherwise stay quiet.

------
forapurpose
Many commenters seem to be relating this post to the Reproducibility Project,
which showed problems with reproducing around 100 psychology studies. It's
important to understand the real conclusions of that project, which is not at
all that 'only a small number of studies are accurate'.

 _Strictly on the basis of significance — a statistical measure of how likely
it is that a result did not occur by chance — 35 of the studies held up, and
62 did not. (Three were excluded because their significance was not clear.)
The overall “effect size,” a measure of the strength of a finding, dropped by
about half across all of the studies. Yet very few of the redone studies
contradicted the original ones; their results were simply weaker._ [0]

Also, as one person pointed out in the same article:

 _Dr. Schwarz, who was not involved in any of the 100 studies that were re-
examined, said that the replication studies themselves were virtually never
evaluated for errors in design or analysis._ [0]

And finally, that's how science is supposed to work. Research isn't a hard
conclusion but an argument in a long debate:[1]

 _But the failure to replicate is not a cause for alarm; in fact, it is a
normal part of how science works._

 _Science is not a body of facts that emerge, like an orderly string of light
bulbs, to illuminate a linear path to universal truth. Rather, science (to
paraphrase Henry Gee, an editor at Nature) is a method to quantify doubt about
a hypothesis, and to find the contexts in which a phenomenon is likely.
Failure to replicate is not a bug; it is a feature. It is what leads us along
the path — the wonderfully twisty path — of scientific discovery._

[0] [https://www.nytimes.com/2015/08/28/science/many-social-scien...](https://www.nytimes.com/2015/08/28/science/many-social-science-findings-not-as-strong-as-claimed-study-says.html)

[1] [https://www.nytimes.com/2015/09/01/opinion/psychology-is-not...](https://www.nytimes.com/2015/09/01/opinion/psychology-is-not-in-crisis.html)

~~~
pc2g4d
Failure to replicate at that rate was an alarm bell indicating that studies
were less robust and reproducible than previously assumed. It's not entirely
"normal" science, as a high rate of failure in replication can be a symptom of
publication bias and everything else mentioned in "Why Most Published Research
Findings Are False"
[http://journals.plos.org/plosmedicine/article?id=10.1371/jou...](http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124)

~~~
forapurpose
> It's not entirely "normal" science, as a high rate of failure in replication
> ...

What is a high rate? What is a normal rate?

~~~
ulucs
5%, which the p-value threshold ensures, provided it isn't hacked (and the
papers don't go through selection effects).
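
That 5% is the false-positive rate per true-null test, though, not the error
rate of the published record. A toy model (the base rate and power numbers
below are made up for illustration) shows what selection on p < 0.05 does:

```python
# If only significant results get published, what fraction of published
# "positive" findings are false? Assumed, made-up inputs:
alpha, power, base_rate = 0.05, 0.40, 0.10  # threshold, power, share of true hypotheses

true_pos = base_rate * power          # real effects correctly detected
false_pos = (1 - base_rate) * alpha   # nulls slipping past p < 0.05

print(f"false share of published positives: {false_pos / (true_pos + false_pos):.0%}")
# -> ~53%, even though every individual test kept its 5% error rate.
```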

~~~
forapurpose
That's the statistical theory, but what is the reality? I would be very
surprised if 95% of results are accurate. We would need a study of
reproducibility across other fields - and of course a study that reproduces
the Reproducibility Project.

> it's not hacked (or the papers go through selection effects)

There are many, many reasons, besides 'hacking' or selection effects, for
results that aren't 100% accurate.

------
bambax
> _if a large body of evidence published in reputable journals supports an
> initially implausible conclusion, then scientific norms require us to
> believe that conclusion. Implausibility is not sufficient to justify
> disbelief, and belief in well-supported scientific conclusions is not
> optional. This position still seems reasonable to me – it is why I think
> people should believe in climate change._

Plausibility depends on the level of expertise one has in a given domain.

Ordinary people don't have any expertise or direct experience of, say,
fundamental physics, or astronomy, and so for them (us) no result is either
"plausible" or "implausible". We can believe them, or not, but we can't really
have an opinion on them. (That goes for climate change: you choose to either
trust the experts or not, but you can't tell them that they're wrong if you
yourself are not an expert).

But social sciences are very different, because all humans have a pretty good
understanding of human nature. Studying it is all that we do, all of the time.

And so, in the domain of studying human nature, "plausibility" really means
something, and the opinion of the layman isn't to be ignored, because chances
are said layman is as competent as anyone.

 _In that specific domain_, results that fly in the face of common
experience should be received with great skepticism and examined and verified
thoroughly.

~~~
danharaj
> But social sciences are very different, because all humans have a pretty
> good understanding of human nature. Studying it is all that we do, all of
> the time.

Relative to what? Anthropologists most certainly have far more expertise on
human nature than anyone else. Everyone else tends to fall into a rut of
essentialism where they believe that their particular, most obviously
culturally specific, experiences are universal.

~~~
bambax
Not sure that anthropologists study "human nature" the way social sciences do
(or should).

Anthropology is the study of human cultures and customs, which probably have
as much to do with accidents, chance and contingency (history) as with
universal human characteristics.

~~~
danharaj
That's the point. They know what's possible about human nature better than
anyone else. If you don't know what's fully possible, you can't make good
inferences about what's universal. Universality is the denial of possibility.

------
beefman
"I still believe that actions can be primed, sometimes even by stimuli of
which the person is unaware. ... I am still attached to every study that I
cited, and have not unbelieved them..." -Daniel Kahneman

------
felon123
The more interesting question to ask is: since there is such strong evidence
of priming, how far should law enforcement agencies go in their own efforts
to prime an individual into committing a crime (à la stings)? Are there any
lines that shouldn't be crossed?

~~~
mattnewport
You seem to have rather missed the point of the comment, which is that the
evidence for priming turns out not to be strong at all. In fact, it is so weak
that the effect may not actually exist.

~~~
oculusthrift
furthermore, priming has nothing to do with the situation that the person is
mentioning

------
JadeNB
Is the article meant to be a link to Kahneman's comment
[https://replicationindex.wordpress.com/2017/02/02/reconstruc...](https://replicationindex.wordpress.com/2017/02/02/reconstruction-of-a-train-wreck-how-priming-research-went-of-the-rails/#comment-1454) that
you quote elsewhere
([https://news.ycombinator.com/item?id=15228754](https://news.ycombinator.com/item?id=15228754))?
It seems weird for it to be a link to this article itself.

~~~
dang
We've linked the submission to the relevant URL so hopefully that will be
clearer now.

------
SubiculumCode
I'd like to say that most of cognitive psychology does not suffer from the
issues so frequently reported in the news about social-psychology research...
but this would be wrong. Too many neuroimaging studies have n < 25.

To this affliction, my own research has luckily been relatively immune.
Developmental studies need large Ns to track age-related trajectories, and
most of my graduate research has Ns greater than 100... for which I am
grateful to my advisor and the NIH.

------
hanoz
I was just about to embark on "Thinking, Fast and Slow" as my next commuting
audiobook. Ought I not to bother?

~~~
justin66
> Ought I not to bother?

Does the act of intellectual honesty displayed on that blog, and the
concomitant discovery that the author is a fallible human being, make you more
or less inclined to read a work that was rather widely pronounced to be
brilliant?

------
ghostbrainalpha
As an engineer, I don't really understand how these bad studies can be such
a big deal.

Doesn't science build off other science? When one scientist takes the
conclusion of another study for granted (Example that X drug is effective),
and then tries to build off of it with their own study (testing if A or B
delivery method is more effective), doesn't it become obvious fairly quickly
the second study is based on a bad conclusion of the first study?

And if the author of the second study calls out the author of the first
study, this type of peer review could be almost as effective as having more
peer-reviewed literature or study replication, which is what everyone is
asking for.

~~~
Q6T46nT668w6i3m
> Doesn't science build off other science? When one scientist takes the
> conclusion of another study for granted (Example that X drug is effective),
> and then tries to build off of it with their own study (testing if A or B
> delivery method is more effective), doesn't it become obvious fairly quickly
> the second study is based on a bad conclusion of the first study?

Indeed. In fact, this is extremely common in biology. It’s possible that some
fields need to confront the replication crisis (psychology, for example, is
frequently cited and the subject of the OP), but I really don’t see this in my
field.

~~~
nonbel
What I've seen in bio research is that inconsistent results are always chalked
up to some difference in the methods.

------
DanCarvajal
Thought a little too fast there.

------
hrasyid
Curious, if replicability of science research is so messed up, how come real
technological progress seems to be happening at all? (e.g. devices, drugs)

~~~
claytonjy
Varies widely by field. Fields whose studies have inherently small sample
sizes and low power, like psychology, are more susceptible to poor
conclusions. That's before we even talk about the __culture __surrounding
different fields.

Take a random study from physics and another from social psychology; the
physics study will generally be a much higher-powered study, botching the
stats will have a smaller effect on the conclusion, and the practitioners
themselves are, arguably, better statisticians anyway.

~~~
icelancer
Physics/Math/CS also have the arXiv, which is a notorious battleground for
testing your ideas in the open (a very good thing).

~~~
Q6T46nT668w6i3m
Biology too: [http://www.biorxiv.org](http://www.biorxiv.org)

------
_Codemonkeyism
All of this could be fixed simply. Either by journals (publish only
replicated work; for other papers, publish only the abstract) - not likely -
or by creating an index of tainted papers.

A paper that is tainted because it can't be replicated by others also taints
all papers that cite it. The index could be searched by author name.

People would be much more careful about what they cite.

Basically, without replication you do not get cited.

Without being cited you do not get scientist karma.
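
A minimal sketch of how such an index might work -- the paper IDs and
citation data are made up, and the mechanism shown is simply that taint
propagates transitively through the citation graph:

```python
# A failed replication taints a paper; the taint spreads to every paper
# that cites it, directly or through a chain of citations.
from collections import defaultdict, deque

cited_by = defaultdict(list)  # paper -> papers that cite it

def cite(paper, citer):
    cited_by[paper].append(citer)

def tainted(failed):
    """Breadth-first walk: everything reachable from the failed papers."""
    seen, queue = set(failed), deque(failed)
    while queue:
        for citer in cited_by[queue.popleft()]:
            if citer not in seen:
                seen.add(citer)
                queue.append(citer)
    return seen

# Made-up citation graph: B and C build on A; D builds on C.
cite("A-1996", "B-2001")
cite("A-1996", "C-2005")
cite("C-2005", "D-2010")

print(sorted(tainted({"A-1996"})))  # -> ['A-1996', 'B-2001', 'C-2005', 'D-2010']
```

Searching by author would then just be an index over the tainted set.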

------
Aron
Biases in the bias literature? How delicious.

------
davidgerard
So, should we expect LessWrong to be amended with retraction notices across
half the Sequences?

~~~
sambe
Are half the sequences on LessWrong about or dependent on priming?

------
model_m_warrior
Learning about the state of empirical evidence and reproducibility in
'science' has been a real bummer. I was made aware via the Sokal affair and
have taken a keen interest since. I'd even go as far as to say this is why
Trump won.

------
perseusprime11
Btw, this applies to your favorite leadership book. The R-index of most
business/leadership/management books will also be very low.

------
wnevets
what is the context of this?

~~~
hannob
Large parts of psychology research of the past couple of decades turned out to
be crap. ("non-reproducible" in scientific terms, but "crap" is close enough.)

Kahneman has defended much of this research in the past. But at least he's
smart enough to recognize his mistakes, which cannot be said about many of his
colleagues.

~~~
kenjackson
Is it limited to priming or is it a bunch of other parts of psychology too?

~~~
drivers99
There were some problems last year reproducing ego depletion despite its
prominence for the last couple decades.[1][2]

Personally, I run across the concept a lot in popular books and articles
that I read (from before the controversy), so it presents a problem for me:
whether to believe them if the effect isn't actually real.

[1]
[http://blogs.discovermagazine.com/neuroskeptic/2016/07/31/en...](http://blogs.discovermagazine.com/neuroskeptic/2016/07/31/end-of-ego-depletion/#.Wbg5Ouvyu5M)

[2]
[https://en.wikipedia.org/wiki/Ego_depletion#Reproducibility_...](https://en.wikipedia.org/wiki/Ego_depletion#Reproducibility_controversy_and_conflicting_meta_analyses)

------
GedByrne
Does any of this undermine any of Cialdini's content in his 'Pre-suasion'
book?

------
tripzilch
I really wanted to start this comment with the HN standard zero-content dick
remark of "Is anyone really surprised ...", because I'm really not surprised.
Except the fair comment would be: is anyone who actually bothered to look up
the references and citations for some of his bolder (but oh so juicy) claims
really surprised? Really?

It's a typical situation of that telephone-whisper game: researchers run a
test they came up with on a group of 21 students. In social psychology this
is considered a reasonable sample size. From where I'm standing, this is a
non-starter for doing research and calling it science. Just don't even
bother. It's better to _not_ know anything than to do it anyway and catch a
bias (yes, like the disease it is).

Ceasing to attribute _meaning_ to what is essentially random data is what
got us into this whole modern era of technology in the first place. It seems
pretty clear by now which fields of science took that seriously and which
ones preferred to instead read (meta meta) meta studies and "sit in their
office, bouncing ideas off one another"[0].

But hey, I'm from computational science and we can always generate more data,
which starts at sample sizes of about, oh, 10K or so? (at least)

I always get the idea that these guys just wanted to sell books, filled with
slightly counterintuitive yet somehow plausible factoids. Add in the veneer
of scientific credibility and you've got a _very_ juicy best-selling combo.
For people who like to feel they are scientific / rational / intelligent, it
becomes part of their ego--something which I can totally relate to, btw, but
I _try_ to be better.

A few things stood out from Kahneman's comment that reflect his ego (even
though he uses words to sound humble, he can't quite find the courage; if he
could, it wouldn't have gotten this far):

I had to look up "file-drawer problem"; it turns out to be a cutesy euphemism
for "publication bias". Care to guess why he doesn't use that word? He does it
twice, even, so it's definitely not to add variation or flavour to his writing
style. Especially when replacing the two usages of the term in context would
become "severe publication bias" and "substantial publication bias undermines
the two main tools that psychologists use to accumulate evidence", which
sounds really pretty damning, much more so than calling it "file-drawer
problem".

> first paper that Amos Tversky and I published was about the belief in the
> “law of small numbers,” which allows researchers to trust the results of
> underpowered studies with unreasonably small samples.

He likes to coin terms a lot. I know the "Law of Small Numbers" in a
mathematical context where it means something entirely different. So I looked
it up and it turns out to be a euphemism for a "hasty generalization fallacy",
kind of the exact opposite of what he suggests here.

Anyone care to check this citation? What is this magical law that allows
researchers to trust the results of underpowered studies with unreasonably
small samples? It sounds _beyond_ implausible to me. You can statistics your
way around this in circles but really you can also just like, _dismiss_
underpowered studies with unreasonably small samples.

> We also cited Overall (1969) for showing “that the prevalence of studies
> deficient in statistical power is not only wasteful but actually pernicious:
> it results in a large proportion of invalid rejections of the null
> hypothesis among published results.” Our article was written in 1969 and
> published in 1971, but I failed to internalize its message.

Yeah right. The real answer is "because I had books to sell, research grants
to obtain and an ego to maintain". You don't _need_ someone to write a paper
about this to come to this conclusion. Of course it's harmful, how can this
not be obvious? It's also, like, a MAJOR part of social psychology and similar
studies because sample sizes are always stupid small. And Kahneman is an
expert in this field. So apart from the fact that he already KNEW this
because he has common sense, he MUST have already internalized it, because
he's an expert in this field and you come across this particular bit of
common sense all the time. Therefore, no, you CHOSE to ignore that reality
(not a "message", because you were perfectly aware of it before the paper
you cited).

> if a large body of evidence published in reputable journals supports an
> initially implausible conclusion, then scientific norms require us to
> believe that conclusion. Implausibility is not sufficient to justify
> disbelief, and belief in well-supported scientific conclusions is not
> optional.

If you come from the point of view of the exact, hard sciences, like physics,
math or computational science, it's kind of hard to see anything wrong with
this statement, on the surface.

But if you know a thing or two about social sciences and the like, the
bullshit that goes on there, you know that almost _everything_ is wrong about
the above statement.

Reputable journals often aren't. And there is nothing, absolutely nothing, in
this world that can _make_ someone believe. You cannot require it. And to say
that "belief" is not optional is almost an oxymoron. Unless you use
brainwashing. Except I'm not sure if "brainwashing" even really works because,
you know, guess what fields conducted the unethical studies into it.

> This position still seems reasonable to me – it is why I think people should
> believe in climate change.

Please don't drag climate science through the same mud as your clusterfuck of
research. The hard numbers and sample sizes they have access to, are so large
you'd soil your pants.

> But the argument only holds when all relevant results are published.

Which you KNEW is not the case, so that's not really an excuse is it.

> I knew, of course, that the results of priming studies were based on small
> samples, that the effect sizes were perhaps implausibly large, and that no
> single study was conclusive on its own.

But I had books to sell, research grants to obtain etc etc

> However, I now understand that my reasoning was flawed and that I should
> have known better.

But I had books to sell, research grants to obtain etc etc

> I knew all I needed to know to moderate my enthusiasm for the surprising and
> elegant findings that I cited, but I did not think it through.

But I had books to sell, research grants to obtain etc etc

> I still believe that actions can be primed, sometimes even by stimuli of
> which the person is unaware.

If he's so convinced that "belief in well-supported scientific conclusions
is not optional", then that also holds when the scientific conclusions say
the opposite, and he should DROP this belief right this instant until proper
evidence is obtained. This is not belief, it's stubbornness.

I mean sure, I would personally say don't throw out the baby with the
bathwater (because I also don't quite agree with the other extreme; science
can be flawed like any human endeavour). But he only _just_ went hard-line
science on this idea, in the very same comment, and you should at least be
consistent about it.

I really don't think he's learned any lesson. He let his scientific beliefs
be guided by ego, clouded by the idea that the research proved what he wanted
to be true, and he's published books filled with untruths that are out there
right now. Is he going to issue retractions? Errata? Because, you know, lay
people are going to read this for years into the future, and believe this
crap.

[0] In a book by the Kahneman/Tversky/Taleb trio of juicy pop-psych writers,
they described their research methodology this way. Proudly so, because what
could be better science than such incredibly smart people being given the
freedom to "bounce ideas off one another" ... Sorry, this post is not proper
science; I can't recall and properly cite which book it was :-/ I think it
was Taleb talking _about_ his buddies Kahneman and Tversky.

------
tonystubblebine
Pretty interesting commentary on failed papers behind Medium's paywall here:
[https://medium.com/@jhreha/most-psychology-research-is-bs-73...](https://medium.com/@jhreha/most-psychology-research-is-bs-73d4793b4dc6)

The focus of the article is on failed replication attempts in Psychology. The
gist:

* The famous failed studies are famous because of characteristics of the media, mainly the media loves quick fixes. "Just doing this one tiny thing once will produce this massive change."

* A common sense reason that small interventions don't produce massive psychological changes: imagine if everyone you knew was changing personalities all the time. It would be chaos.

~~~
ktta
What the hell? Medium is soft-blocking articles now?

~~~
tonystubblebine
Oh, you missed that? Medium went on a whole crusade against ad-supported
content, then built a metered paywall, and is sharing that revenue with the
authors.

I think it's good and could tell you a lot more about it because I've written
for their new program and had one of the first publications to join.

~~~
ktta
Interesting. I thought it was more of a Quora-like login-to-see-anything
model (but I think Quora has since removed it), which was why I was
surprised.

I can't really find a fault with the current model of Quora, but for some
reason $5/month sounds a bit high.

~~~
ktta
correction: "...current model of _Medium_ "

~~~
tonystubblebine
Re: $5.

It'll be sort of interesting to see what it looks like in three years.

As of now, I tell people that if they like my stuff, then the $5 is a bargain.
We're publishing crazy detailed personal development stuff behind the paywall.
Definitely worthwhile.

But if that's not your thing, then it's much less of a guarantee right in this
moment.

In three years though... if they execute on a Netflix for content model...
that $5 could be crazy compelling. There is a lot of great writing that would
be unlocked by just having a reader who already paid rather than trying to
jump through content marketing, SEO and virality hoops.

------
feelin_googley
Michael Lewis's latest book, "The Undoing Project", is about the relationship
between Kahneman and Tversky, and to an extent it discusses their research.
The problem with the 1980s Bargh paper and other studies is never mentioned.
There is not even a hint that any of the research Kahneman relied upon was
later called into question. I guess Lewis did not think it was relevant. Not
sure why, but I expected this controversy over replication to at least be
mentioned.

------
0xbear
And that is why, whenever you hear that the "science is settled" on
something, you know for sure you're talking to a politician and/or a
charlatan.

------
eli_gottlieb
Bwaahahahahahahaaaaa! Dual-process theorists, _repent!_

~~~
edanm
Does this invalidate dual process? Why?

------
will_be_no_ai
In the face of generalized malfeasance across all domains, people's faith in
climate change will remain. Surely, that science must be the one that is
correct?

~~~
31reasons
Dr Harvey and Irma are replicating that science for you.

~~~
will_be_no_ai
I wasn't really trying to be that much of a smartass. When you read the
parent article and a couple of others posted recently dealing with
replication problems, it makes you wonder whether we have the facts to enact
sweeping societal change.

When Bill Nye starts calling for burning of the heretics:

[http://www.washingtontimes.com/news/2016/apr/14/bill-nye-ope...](http://www.washingtontimes.com/news/2016/apr/14/bill-nye-open-criminal-charges-jail-time-climate-c/)

I wonder if fact or emotion is guiding us here.

Do a couple of hurricanes prove this? For those with faith, they do.
Congrats.

