
We tried to publish a replication of a Science paper in Science - hhs
https://slate.com/technology/2019/06/science-replication-conservatives-liberals-reacting-to-threats.html
======
mrb
The Associate Editor of Science who rejected this paper when it was submitted
happens to be my wife's friend. They have been having long discussions about
this issue. My understanding is that although it's a valid study, it just
doesn't bring anything new to the table that is worthy of publication. This
paper refutes a 2008 paper that has already been refuted. So the Associate
Editor decided there is no reason why this new refutation deserves to be
published in Science. That's all.

I don't think there is any case of scientific malpractice here. Keep in mind a
high-profile journal like Science is extremely competitive and receives many
paper submissions. They _have_ to filter what they choose to publish based on
some reason. And it seems to me their reason is perfectly valid.

Now this is turning into a bigger dispute because the researchers started
arguing with the Associate Editor on Twitter (I heard the verb "harass".) So
he blocked them on Twitter. And this is making them more upset. Etc.

~~~
blue_devil
This paper refutes a 2008 paper that has already been refuted.

Could you cite?

~~~
mrb
I don't know, sorry. I'm just repeating what I heard from my wife. I will ask
her.

~~~
rjf72
I think this would be strong evidence for _why_ this should be published, even
if some no-name journal happened to publish a refutation at some unknown time
in the past. I just searched for the paper 'Political Attitudes Vary with
Physiological Traits' on Google Scholar and found a large number of papers
citing the paper in question, and searches on regular search engines turn up
an even vaster number of mass media references to the paper. Yet I'm unable
to find any reference to any replication of it, even with keyword tuning.

Whatever replication-cum-refutation there may have been seems to have eluded
not only society, but also scientists at large. I do believe both her and
Science that the study was refuted at some point, but if nobody knows this and
people keep assuming the study is meaningful - then this refutation is
completely meaningless. Science as a journal brings the eyeballs and thus has,
in my opinion, an obligation to publish data indicating that extremely
influential studies they have previously published are likely flawed.

------
crazygringo
Interesting. The fundamental question is: do journals have an ethical
obligation to publish credible results which refute previously published
results in the journal?

The authors argue yes. I'm not sure myself -- what is a refutation vs. a
different perspective/interpretation or other addition and where do you draw a
line?

But it's intriguing to think if this would be a meaningful force for better
results -- and could be an incentive to replicate findings you think might not
hold up, just from being in a prestigious journal.

~~~
Misdicorl
No. This belongs in a journal dedicated to the field of study, not a general
issue journal like Science.

The authors are silly to think otherwise. It is also not surprising that
Science published a flashy article without a lot of oomph behind it. It's what
they do.

~~~
pdonis
So a journal can get a free pass on an obvious violation of scientific ethics
just by saying that they're a general issue journal?

The field of study has nothing to do with it. If a journal publishes a result,
they are ethically obligated to publish further work that refutes that result.

~~~
dash2
“If a journal publishes a result, they are ethically obligated to publish
further work that refutes that result.”

Too general. Few in science would accept this. The claim here is the weaker
one that journals are ethically obliged to accept failed replication studies -
and even that is non-obvious to me. (Should Science also be obliged to accept
_successful_ replications? No? So it should accept papers on the basis solely
of the results? Hello, publication bias.)

~~~
Hasknewbie
Nice attempt at reframing there, but if this new paper is correct then it's
not a 'failed replication', it's a refutation. Big difference.

------
sgustard
"Science is facing a 'reproducibility crisis' where more than two-thirds of
researchers have tried and failed to reproduce another scientist's
experiments, research suggests."

[https://www.bbc.com/news/science-environment-39054778](https://www.bbc.com/news/science-environment-39054778)

As such, this would seem to be a common occurrence and Science would have to
decide whether to devote equal attention to every failed reproduction.

~~~
Veedrac
> this would seem to be a common occurrence and Science would have to decide
> whether to devote equal attention to every failed reproduction

I think the answer to this is a rather blatant “of course!”

If Science has the time to support false studies that set back the
scientific cause, it certainly has the time to publish their corrections and
refutations. The scientific process is not about how much stuff you can
publish, it's about how much _true information_ you can convey. A journal half
the size of Science but twice as trustworthy is at least as valuable, if not
vastly more.

~~~
modriano
> I think the answer to this is a rather blatant “of course!”

It's not as blatant as you think. It takes experienced scientists to review
submitted papers, and their time is an expensive resource. Is it interesting
that a research team with a sloppy adherence to a methodology failed to
replicate a study? Not really. Should Science give resources and a platform to
people who are just careless researchers? Obviously not. It's easy to fail to
replicate an experiment; just do a bad job.

~~~
Sacho
There's an implication in your argument that the science posted in the journal
is of a good enough quality to be published - otherwise why wouldn't you apply
the same standard you are applying here? Doing a bad job isn't limited to just
reproduction.

Considering the reproducibility crisis, is it your position that people are
worse at reproducing research than they are at conducting it? An alternative
interpretation would be that people have more incentive to publish hastily-
made "failed to reproduce" papers than they have "findings" papers, but I
don't think that's the case either, especially considering the reply from
Science to this case.

~~~
brighter2morrow
I think the rationale for Science to say they already fulfilled their
obligations is that they thoroughly scrutinized the methodology the first
time around. This request asks them to scrutinize two methodologies, for
less payoff than the first time.

------
Gatsky
The original paper was an n=46 study, and they have regression equations there
with 5 coefficients. It has been known for a long time that you can't
successfully adjust for confounding variables with so little data. The fact
that it didn't reproduce was easily predicted and is hardly noteworthy.
Everybody in the field is aware of this issue. This article is just a bunch of
academics tooting their own horn. The journal's response to their request was
'Who cares?', and that is the right answer.
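Gatsky's sample-size point can be illustrated with a quick simulation (a sketch with made-up numbers, not data from the paper): at n = 46, even a single correlation coefficient, never mind a five-coefficient regression, is estimated with enormous noise.

```python
# Illustrative sketch: how noisy a correlation estimate is at n = 46,
# the original study's sample size. The true effect (r = 0.2) and the
# seed are hypothetical.
import random
import statistics

random.seed(0)
N, TRUE_R, SIMS = 46, 0.2, 2000

def sample_r(n, rho):
    """Draw n correlated (x, y) pairs with true correlation rho and
    return the sample Pearson correlation."""
    xs, ys = [], []
    for _ in range(n):
        x = random.gauss(0, 1)
        y = rho * x + (1 - rho**2) ** 0.5 * random.gauss(0, 1)
        xs.append(x)
        ys.append(y)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

rs = [sample_r(N, TRUE_R) for _ in range(SIMS)]
spread = statistics.stdev(rs)
print(f"true r = {TRUE_R}, spread (sd) of sample r at n = {N}: {spread:.2f}")
print(f"estimates ranged from {min(rs):.2f} to {max(rs):.2f}")
```

Across 2,000 simulated studies the estimates scatter widely around the true value, which is why a one-off n = 46 result should carry little weight on its own.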

~~~
dagenix
They published it. So either not everyone knew it was nonsense, or they
knowingly published garbage because it would make for nice press, and now they
refuse to retract it. Either it's shameful behavior in the past, or shameful
behavior now, or maybe both.

If my friend asks me what the score in last night's game was, and I tell him
4-3, but then later realize that it was actually 4-2, I'll send him a text to
correct the mistake. If your argument is that a lower standard applies to one
of the premier scientific journals, I think that's pretty messed up.

~~~
Gatsky
There is a reproducibility crisis in psychological research. The main reason
for this is that the experimental methods they use are not robust (although
they might be rigorous). A second reason for this crisis is that the things
they are measuring are often hard to actually define (eg is there a definition
of 'conservative views' that we can all agree on, and does any such
categorization in an experiment really mirror external reality?).

In reality what happened with the first paper is the editors of the journal
sent it out for review, and whoever reviewed it was happy that it met the
methodological standards considered sufficient at the time. This is not the
same as being happy that it was true. You might consider that 'messed up' (and
you are right, it is messed up), but that was how the sausage was made in
psychology research in 2008. There is no galactic council that imposes
standards of evidence in a journal. They rely on the reviewers, and the
reviewers are active researchers in the field.

As an aside, publication in a top journal is not a guarantee that results are
watertight, and should never be considered as such. If you want to believe a
result, you need to read the paper and think about it carefully. Even then,
this approach is susceptible to outright fraud, which also does happen.

~~~
dagenix
If we have a reproducibility crisis in psychological research, it seems to me
like one of the best things we can do to address it is to incentivize people
to try to reproduce other groups' findings. It's not sexy and it won't earn
headlines, but it is important. The absolute best way to kill such an effort
is for a journal that published a bad study to refuse to even consider
publishing a study that refutes it.

~~~
Gatsky
You’ve misdiagnosed the problem. The best thing we can do is incentivize
research psychologists to produce robust research in the first place. This is
unfortunately at odds with publishing lots of papers, making money off pop-
psych books and giving TED talks.

------
gota
"Replication" might suggest they obtained the same results.

A better title for laypeople would be "We _refuted_ a Science paper that can't
be replicated and Science won't publish it"

~~~
scoofy
Except that's not how science works. Publishing a paper with a contradictory
result is not a refutation. It's a data point.

~~~
gota
I'm not educated in how it works for this kind of (sociological?
Psychological?) experiment but finding repeated instances - as per the trials
reported in the article - seems like more than just a data point. They have
many more negative data points than the original paper had positive ones

In pure formal logic they have at least proved it is not true for all cases,
so it's a refutation in that sense at least

But in any case, "refutation" is closer in popular lingo to what happened than
"replication"

~~~
scoofy
This gets into deep philosophical questions. Logic can be deductive or
inductive. Your use of formal logic implies deductive logic, but empirical
science is inductive.

With induction, one can only have negative knowledge (this is known as the
problem of induction or the black swan problem), but it gets more complicated
than that. Most studies are trying to look at probabilistic outcomes and are
trying to get significant p-values. There are myriad problems with this
beyond simply the existing probability that their sample was biased. E.g.,
their methodology could be improper, or even just less than ideal. There could
even be unknown problems with the underlying assumptions and the model being
used by the researchers.

To suggest that an opposite result _refutes_ a previously obtained result
deeply mis-characterizes how science and human understanding works, and the
problems inherent in this thought process deeply overvalues the knowledge we
have and undervalues the more-than-likely seriously uncertain data we work
with every day.

If you're interested in these problems, I'd suggest Karl Popper's work on the
philosophy of science.

~~~
gota
Thanks for your comment, I see it can help frame the conversation, but I meant
to say I'm not versed in the terminology for social sciences. I'm about to
finish my Doctorate heavily involving formal logic and logic programming, and
have read a sufficient bit about the philosophy of science

This is an interesting discussion but I'm unfortunately out of time now. I'd
only point out that most of science is abductive, rather than inductive, and that
the whole concept of refutation will take us to discuss what is scientific
_advance_

~~~
your-nanny
Best just to think in terms of Bayesian updating.
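The suggestion can be made concrete with a toy calculation (every likelihood below is hypothetical, chosen only to show the mechanics of the update, not taken from any of the studies):

```python
# Toy Bayesian update on the hypothesis "the effect is real",
# treating the original study and the failed replications as evidence.
def update(prior, p_evidence_if_real, p_evidence_if_not):
    """Bayes' rule: return P(real | evidence)."""
    num = prior * p_evidence_if_real
    return num / (num + (1 - prior) * p_evidence_if_not)

p = 0.5                      # agnostic prior
p = update(p, 0.8, 0.05)     # original positive result (hypothetical numbers)
print(f"after the original study: {p:.2f}")
for _ in range(3):           # three failed replications
    p = update(p, 0.2, 0.9)  # a null result is expected if there's no effect
print(f"after the failed replications: {p:.2f}")
```

A single flashy positive pushes the posterior up sharply, but a few failed replications pull it right back down, which is the whole argument for publishing them.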

~~~
gota
Indeed, abduction is a way to base your priors in Bayesian nets!

------
rjf72
The thing I find most interesting about this is that I think people's
responses here are being driven in significant part by _what_ was refuted. In
particular the bad science was, as is frequently the case, on a political
topic and one that I think affirmed the biases of many. In this very thread
you can already see people literally arguing that the study that refuted it
"was just a data point" which is a direct attempt to try to ignore the far
more substantial data provided by the replication effort in showing that the
original study was unsupported.

Imagine if the paper that was refuted was one that claimed to show that e.g.
humans were not playing as significant a role in global warming as thought.
And then Science refused to publish a study that seemed to show conclusively
that the original study was, at best, deeply flawed. Would people be
responding in the same way?

------
bryanrasmussen
Given that publishing an article costs money, publishing an attempted
replication is not likely to happen. If the replication succeeds, it won't be
published because "we basically already gave you the information"; if it
fails, it probably won't be published because, well, a replication failed. And
should they publish it, what if someone comes along and says "no, we succeeded
in our replication" - should that be published as well, because someone else
had a replication fail?

How much of the journal should be given over to replications that fail, and
then the replications that succeed where another failed, and any potential
back and forth?

At any rate, a replication study, while important for the process of science,
is not so important for the process of news. Science is a business, and using
their resources to publish replications does not seem likely to increase their
profit (it might even reduce it) while it would increase their costs, so I am
surprised anyone would even think it should or would happen.

Perhaps what there should be is a reproducibility column, where studies
attempting to replicate previously published studies are noted. A paragraph
summation could be given. As reproducibility of a finding becomes accepted
further replications would be dropped from the column as redundant.

on edit: grammar fix.

~~~
kerkeslager
If you don't publish replications and failures to replicate, you aren't
publishing science. If a study is worth publishing once, it's worth publishing
enough times to establish its result as conclusive. Otherwise, you haven't
discovered anything, and the whole scientific process has failed, since the
purpose of the scientific process is to discover conclusions.

You're right that in business, there's a disincentive to publish replications,
because business treats science publication as literary nonfiction: they're
trying to publish the most sensational results, because the most sensational
results sell journals. This is one of the strongest arguments that scientific
publication shouldn't be a business: making scientific publication a business
creates incentives for scientific publications to not publish science.

~~~
bryanrasmussen
That's all well and good, but please show how the need of science to publish
all these studies will square with the need of Science to make money so it
can continue publishing studies. My assumption is that the need to make money
will beat the need to publish replications.

on edit: because it seems like your argument really is that Science the
business should shut down.

~~~
kerkeslager
I don't care about the needs of Science the business.

Science the business should shut down, not of their own volition, but because
the scientific community shifts away from for-profit publications and Science
has the respectability of a tabloid.

~~~
bryanrasmussen
Great, that's an honest response - an honest argument to be made.

A dishonest response is saying they should publish things that makes no
business sense for them to publish, and the logistics of which really makes no
sense for their business model, with the seeming purpose of upbraiding them
when they fail to do this thing they would never do.

~~~
kerkeslager
You're basically saying that Science (the business) should do whatever makes
the most money, always, and anyone who criticizes them for doing that is
"dishonest".

Science (the business) is being dishonest by pretending to publish science
(the method for obtaining facts). It's not okay just because it makes them
money, and it's not dishonest to upbraid them for their harmful behavior.

------
gallerdude
um, what? It isn’t the obligation of Science to publish any random piece that
the authors conveniently argue is foundational. Null results are useful, but
not the most useful thing.

“Best Actor nominee argues why his performance is the most impactful,” etc,
etc.

~~~
ianferrel
The original article was published in Science, so this isn't a "random piece".
It's a specific refutation of something they saw fit to publish.

~~~
gallerdude
I’m not sure “refutation” is the right word.

> “We still believe that there is value in exploring how physiological
> reactions and conscious experience shape political attitudes and behavior,
> but after further consideration, we have concluded that any such
> relationships are more complicated than we (and the researchers on the
> Science paper) presumed.“

That sounds more like a clarification or a letter to the editor. I don’t
disagree that nuance is important, but I assume Nature believes they can get
more nuance out of visiting new studies than reinterpreting old ones.

~~~
closed
They failed to replicate the study over multiple attempts, one of them with
four times the sample size.

Using the time reversal heuristic, if we flip the order, so the replication
came before the original, how would you feel about Science publishing a paper
with a positive result, when multiple studies--with larger samples and the
same materials--had not obtained the result?

If you think the original should still be published in Science in that case,
why? Since the original is published and highly visible, it's dangerous that
Science won't correct the highly publicized, unlikely-true result.

~~~
pmiller2
> ...publishing a paper with a positive result, when multiple studies--with
> larger samples and the same materials--had not obtained the result?

That happens _all the time_. Papers with null results tend to either sit in
drawers unpublished, or never get written at all. Nobody knows about the
multiple studies because nobody will publish them.

~~~
roca
And that is a huge problem. Are you arguing that because null results are
rarely published, we should publish them _even less_?

~~~
pmiller2
Where are you possibly getting that from?

~~~
Kim_Bruning
We agree that currently it happens all the time.

Some number of people (including eg. Roca) say that this is a problem - a
"replication crisis" if you will.

Those people say that perhaps papers with null results _should_ be published.
Perhaps -they say- publication of null results could be (one possible) (part
of) the solution.

Your reply sounded like you don't completely agree? Is that true, or did Roca
read you wrong? Could you clarify?

~~~
pmiller2
I agree that null results should be published. I was making the point that
some otherwise reasonable people disagree, for career or other reasons. If I
were a researcher, I probably would not attempt to get a null result
published; I’d work on something else that actually had a chance of going into
my tenure file.

------
scythe
Getting into _Science_ is about their editorial board’s wacky definition of
notability. It has little to do with quality or even impact.

------
raslah
Perhaps it’s the political nature of the study, but everyone seems to be
missing the basic flaw in the argument for publishing this reproduction: the
flaws of the original research (small sample size, weak correlation, etc.)
were published for everyone to see. We’re talking about a science journal
here, and the general mindset of the comments seems to be equating it with a
magazine or something. Educated readers, the real audience of this journal,
would’ve read the research ‘against the grain’, if you will, and immediately
seen that there was little or nothing there. This raises the question of why
it was published in the first place, to which an answer might be that it was
unique research at the time. The methods pass muster and the data is there to
be scrutinized by the reader. These guys are basically taking advantage of the
obvious and attempting to, as they say themselves, ‘make their careers’.

~~~
jtbayly
But “we” didn’t see the flaws. It has been a high impact study, and people
both inside and outside of academia have been working off the assumption that
it is true.

~~~
raslah
There’s no reason not to have seen the flaws (chief among them the minuscule
sample size, particularly for such a broad conclusion) if one read the study
critically. This gets at a symptom of our culture’s lack of understanding of
how research works. That’s why one of the most powerful marketing techniques
today is to merely state ‘research shows xyz’ with only a footnote referring
to the underlying studies that no one will read; those who do will read them
with specific prejudice rather than general skepticism. Those who work under
the assumption you mention are making the same mistake of reading the
headline and taking it as knowledge, usually because it supports some bias of
theirs or some point they’d like to make one way or the other, just like those
who are unnerved by the journal not publishing the replication study.

------
vinay_ys
What's the modern peer review process for science? Is there an open forum
filled with reputed researchers who are willing to write their reviews, ask
questions or give suggestions to young researchers who are submitting their
work for review in a well-formed format? And is there a role for other
bystanders who are observers out of curiosity and may be inspired to become
scientific researchers or reviewers themselves? In this age of Internet, isn't
this kind of online collaborative open forum going to scale much better?

Why are these age old print journals still in charge of this process with
print era processes??

------
SubiculumCode
In my opinion, the most reliable science comes from mid-tier journals. The
highest impact journals are mixed with the absolute best OR the absolute
hyped.

~~~
SubiculumCode
As well as top sub-discipline journals, which often have an impact factor of
only around 4-5 but, within the sub-discipline (e.g., developmental
psychology), carry both respect and influence.

------
repolfx
I'm somehow both surprised at the comments here and also not surprised. A
remarkably large number of people posting find no problem whatsoever in a
supposedly top journal claiming "the field has moved on" from a major and
famous study that turns out to be wrong. Heck, I'm not a psychologist and I've
heard of this study!

All sorts of important questions are raised by this:

\- Should we trust anything published by Science or other top journals at all?

\- Is the lackadaisical attitude related to the subject matter? I've read that
studies casting conservatives in a good light get replicated almost
immediately, and psychology is completely dominated by liberal voters (>90%).
Which leads to:

\- Why did it take over ten years for such a widely cited study to be
replicated? Nobody cared because it fed their own political biases?

And finally:

\- What do we do about it? Can academia reform itself at all or is that a lost
cause? If so perhaps psychology isn't really worth studying, given the large
expense involved and the fact that the outputs so often seem to be noise.

~~~
Kim_Bruning
As an (old/ex-ish) wikipedian, I often hear people say "Wikipedia is
unreliable"; which is indubitably 100% technically correct.

But the statement is highly misleading, because what are you comparing it
with?

Put another way: There's no such thing as a 100% reliable source for anything.
You need to apply judgment in all things.

'Science' is actually a pretty decent journal. It didn't get its reputation
for nothing. This doesn't mean that you can just assume they are a magical
source of all truth; just like you can't assume 'Wikipedia' is (_definitely_
not), or 'Nature', or 'Britannica' or etc.

\- Should we trust anything published by Science at all? Only to a degree. One
must always be skeptical and think for oneself, no matter what the source.

\- Why did it take ten years for the study to be replicated? Maybe it didn't.
Perhaps these are just the first authors to speak up about it.

\- What can we do about it? Continue working to improve the process, as we
have done over the past centuries or millennia. The authors of this particular
paper spoke up. That's part of the process too.

~~~
repolfx
Whilst I see your point, the problem here is not "people want Science to be a
source of truth" but rather "people want Science (and other journals) to
strive for the truth", which is subtly different. A part of striving for truth
is being interested in cases where you were wrong as well as right. Science
has given the impression here that they don't care if astonishing claims turn
out to be false. This is the sort of behaviour associated with tabloids. Is
Science an entertainment magazine or is it striving for truth? The standards
of behaviour expected are different.

------
bjourne
On one hand their research is important and should be published. On the other
hand, do they have a right to demand that it be published in the most
prestigious journal in the world? High-profile journals reject good articles
all the time for various reasons. "The field has moved on" in particular is a
grating reason for rejection, but it is used all the time.

~~~
dwheeler
In general they don't have a right to demand it. However, in this case it is a
direct refutation of a previous article that the journal published. At the
very least their refutation deserves to be evaluated by a peer review process
and considered for publication. In fact, I will go further, if a paper
directly refutes older work in a journal, then that journal probably DOES have
an obligation to publish it as long as it is done at least as well as the
original.

The claim that the field has moved on is complete garbage and nonsense. It is
farcical on its face. The older work is still being quoted.

~~~
bjourne
Maybe you are right, but Science isn't an outlier here. All journals are
loath to publish direct refutations of previously published articles.
Conference papers are even worse in that regard - zero room for refutations.
"The field has moved on" is probably, second to "The article contains factual
errors", the most common reason for article rejections. This problem is not
limited to Psychology either. It is just as prevalent, if not more so, in
Computer Science.

------
mLuby
TL;DR: they replicated the famous study about liberals and conservatives
reacting differently to threats, finding no difference. Science (a famous
journal) refused to publish the null replication.

>We believe that it is bad policy for journals like Science to publish big,
bold ideas and then leave it to subfield journals to publish replications
showing that those ideas aren’t so accurate after all. Subfield journals are
less visible, meaning the message often fails to reach the broader public.
They are also less authoritative, meaning the failed replication will have
less of an impact on the field if it is not published by Science.

Agreed, but maybe the issue here is journals shouldn't be prestigious. I
understand journals' historical usefulness, but the Internet has made them
obsolete.

~~~
tathougies
> Agreed, but maybe the issue here is journals shouldn't be prestigious. I
> understand journals' historical usefulness, but the Internet has made them
> obsolete.

Prestige and historical usefulness are practically synonyms. You're arguing
that prestige shouldn't command respect, but this is like arguing against
human nature.

"Science" should recognize its importance to the world at large and take extra
care to publish failures of replications of papers that it reports. The more
prestigious the journal, the more carefully they ought to follow this
prescription, especially if they wish to maintain their prestige.

~~~
dmix
Agreed, the whole narrative of the internet democratizing things has been
largely overstated and poorly supported.

So many of the existing social and industry hierarchies were merely replicated
on the Internet. As we see with all the popular print newspapers and magazines
often being equally or more successfully online.

Maybe the hierarchies existed for a reason and were a good thing. The point is
then to build better hierarchies rather than replace them entirely with some
academic idealistic pipedream (ditto for markets in economics).

~~~
Dowwie
Widespread adoption of the internet has dramatically changed the world.
"Democratization" was an early term used to describe the opportunities of the
early internet and it remains ongoing. There have been many great collective
accomplishments in the last 20 years that are indicative of that
democratization.

------
jxramos
> We should continue to have frank discussions about what we’ve learned over
> the course of the replication crisis and what we could be doing about it (a
> conversation that is currently happening on Twitter).

What's this twitter conversation they reference?

------
your-nanny
Failed to replicate != refutation. The latter is definitive, as in deductive
logic, but science is inductive. Retracting articles that are no longer
convincing in light of new data is the wrong way to go.

------
your-nanny
I applaud the authors for making a laborious attempt at replication. But this
comes across as sour grapes. They don't make a convincing case for imposing an
obligation on Science to peer review their paper.

------
tehjoker
While I keep my mind open to the idea that science might have new results in
this area, there's a certain degree of ideology that goes into coming up with
these studies in the first place. If this study had come out at a different
point in history, its significance would be less important. However, because
of the breaking down of the political order in the United States, people (and
especially liberals) are scrambling to explain it. Why are they different from
us? How could they believe something different?

While people do have varying anxieties, knowledge, and reactivity to aversive
stimuli, there's a stronger prior that political beliefs are based on a few
different things.

1) Class Position: People with money, or who believe they are likely to ascend
classes trivialize the struggles of others and ascribe problems to individual
character rather than systemic design flaws. If an airplane crashes, do we
blame the pilots?

2) Culture / Milieu: What ideas are floating around? What do people whom you
speak to on a regular basis believe?

3) Personal Experience: Have you had a powerful experience that confirms or
contradicts the broader culture?

4) (often but not always most importantly) What will personally benefit you or
people you care about?

There's a few other things you could throw in there as well, but Marxists
would say to liberals that the reason they are different from you is:

1) They aren't that different from you. You both believe in the integrity of
the capitalist system.

2) Often they are from dominant racial or economic castes.

3) There's an intentional campaign on the part of the elites to divide the
working class against each other, and one way is via scapegoating minorities
and women. Everyone likes to say "No matter how bad it is, at least I'm not
X".

4) There is an intentional campaign waged on the part of elites to tie
American national identity to its military campaigns overseas.

5) There is an intentional campaign on the part of elites to use race and
religion as a wedge issue to divide the people so they do not blame the people
causing their problems.

That's not a scientific explanation, but my strong prior is that there's
variation amongst people, but we're basically all the same and we're mentally
flexible enough to think through adversity and come to our own conclusions.

------
blue_devil
Imagine if refuting a famous study were guaranteed to get you a publication in
a prestigious journal - it would be impossible to disentangle the conflict of
interest.

~~~
blue_devil
Edit to my own comment to clarify: while I believe replication should be
encouraged, because statistics!, this might set a precedent that gives
incentives to scientists to replicate high-profile articles AND refute them
since the original journal would be guaranteed to publish them. If you've been
in academia, a publication like this could grant your grant, pardon the pun.

~~~
username90
You don't think there is a problem with it being that easy to convincingly
refute articles in the most prestigious science journals? I think that
encouraging those refutations would be good, both to force the reviewers to do
a better job and, as you said, to encourage opportunists to poke holes in
existing papers. It would be good if a significant number of scientists built
their entire careers around poking holes in bad papers like this.

~~~
blue_devil
Technically, not a refutation because a statistical analysis never refutes a
hypothesis, only supports it with a certain probability or does not support
it. That is, a negative result is inconclusive. No, I do not think "poking
holes" should be incentivised - I believe REPLICATIONS of studies should be
incentivised, because every study = a sample. Statistically, the more samples,
the better able we are to filter through the noise. What you suggest will
encourage scientific sensationalism.
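The "every study = a sample" point can be put in numbers (a back-of-the-envelope sketch; the per-subject noise and study size below are placeholders, not values from the paper):

```python
# Pooling replications shrinks the noise: the standard error of a mean
# scales as sigma / sqrt(n), so k pooled studies of size n behave like
# one study of size k * n.
import math

sigma, n = 1.0, 46          # hypothetical per-subject noise, per-study size
for k in (1, 2, 5, 10):     # number of exact replications pooled
    se = sigma / math.sqrt(k * n)
    print(f"{k:2d} studies pooled -> standard error of the mean: {se:.3f}")
```

This is why replications, rather than one-off "gotcha" refutations, are what actually filter signal from noise.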

------
SubiculumCode
Perhaps what should happen is that replications should be published beneath
the paper it replicates, as an addendum.

------
kabwj
What did the original paper imply? I didn’t see a clear explanation in TFA. I
suspect that it implies that conservatives are more likely to ignore threats
but I’m not sure.

~~~
SaintGhurka
It was more like the opposite: the study claimed that conservatives had a
stronger physiological reaction to threats.

------
xkcd1963
misleading title

------
felipelemos
* I just failed to read the next paragraph, and should have finished the article first. Just ignore this.

"We conducted two “conceptual” replications (one in the Netherlands and one in
the U.S.) that _used_ _different_ _images_ to get at the same idea of a
“threat”—for example, a gun pointing at the screen. Our intention in these
first studies was to try the same thing in order to calibrate our new
equipment. But both teams independently failed to find that people’s
physiological reactions to these images correlated with their political
attitudes." Emphasis mine.

Why didn't they use the same images?

~~~
hn_throwaway_99
Continue reading down the article - they did do a study with the exact same
images.

~~~
felipelemos
Just in the next paragraph. I should have finished the article first.

