
Why I retracted my Nature paper: correcting the scientific record - tokenadult
http://retractionwatch.wordpress.com/2013/06/19/why-i-retracted-my-nature-paper-a-guest-post-from-david-vaux-about-correcting-the-scientific-record/
======
epistasis
There are many well-documented cases showing that both Nature and Science are
particularly susceptible to these types of shenanigans. I don't know if this
is unique to them because of their particularly competitive spirit, or if a
distaste for scrutiny extends to most journals and it's just that Nature and
Science receive far more of it than other journals.

These high-profile journals serve the purpose of publicizing discoveries that
are thought to have far-reaching interest in science. At the same time,
however, they impose the strictest limitations on publications, meaning that
the methods section, the foundation of science, is rarely described in any
detail. So they have some of the worst publication practices while supposedly
chronicling the most important discoveries.

So while Nature/Science papers make the news and make people's careers, it's
important to downweight the certainty of the finding until there is
independent verification.

~~~
akiselev
From the horse's mouth: only 6 out of 53 "landmark" oncology studies could be
substantially reproduced [1]. Bayer did a similar study and found that they
couldn't reproduce the results of about two-thirds of the papers they tried to
use for drug discovery R&D [2].

This is a near universal consequence of academia's incentives and extends into
many of the sciences (although biology is most susceptible due to its
complexity).

[1]
[http://www.nature.com/nature/journal/v483/n7391/full/483531a...](http://www.nature.com/nature/journal/v483/n7391/full/483531a.html)

[2]
[http://www.nature.com/nrd/journal/v10/n9/full/nrd3439-c1.htm...](http://www.nature.com/nrd/journal/v10/n9/full/nrd3439-c1.html)

~~~
gizmo686
>This is a near universal consequence of academia's incentives and extends
into many of the sciences (although biology is most susceptible due to its
complexity).

It is not simply a matter of incentives (although they play a part). Another
issue is pure publication bias. If you get 'landmark' results, a journal is
much more likely to publish them than if you got the results everyone would
expect. In many ways this is one of the necessary functions of the journals,
as it allows people to look at the new stuff without sifting through many
articles of expected results, but it does create a bias.

Add to that the incentive of needing to get published, constrained resources,
and a willingness to publish what may be sub-par methodologies (of course,
the authors insist they are par), and it is clearly not the best system.

------
rrrrtttt
Science publishing is a classic case of the Market for Lemons [1]. The
authors are like used-car salesmen in that they know far more about the
results in their paper than their readers and referees do. Even if just a
minority of authors abuses this asymmetry, the value of publishing a paper
plummets and honest authors no longer bother with the whole process. The well
has been poisoned.

[1]
[http://en.wikipedia.org/wiki/The_Market_for_Lemons](http://en.wikipedia.org/wiki/The_Market_for_Lemons)

------
a3_nm
What I find the most surprising in this story, without having to take sides
about the merits of any of the discussed papers, is that the initial
refutation of the work of Bellgrau et al. was reviewed by Bellgrau et al.

If this fact is true, then there is a clear conflict of interest: Bellgrau et
al. should have declared it and declined to review, or, barring this, the
editors should have noticed it and ignored the review.

~~~
ISL
If a paper is being contradicted by another, it's quite common to send a copy
for review to the original authors. Who better to look for holes in a new
paper than someone who's expert in the field _and_ about to have a big problem
on their hands?

The editors are the ones who make the go/no-go decision. In my field, I've
seen faulty papers that had been rejected by expert referees nonetheless get
accepted by major journals. A lot goes into an editor's go/no-go decision; I
don't envy that job.

If the article is correct, then the editors at Nature may have made the wrong
decision. Most of us are too un-expert in the field to draw any conclusion.

~~~
jessriedel
I believe in this case, where two outside reviewers recommended acceptance and
the original authors recommended rejection, the appropriate action for the
editors was to get the two outside reviewers' opinions on the original
authors' report. Better still, a third outside reviewer could have been
contacted to assess the original authors' report.

~~~
ISL
I'd even call for a greater number of referees. There are big practical
problems with doing so, but the relative uncertainty associated with the
traditional two or three referees is >50% (1/sqrt(3) = sqrt(3)/3, about 58%).
An ensemble of 7-8 referees may produce a more reliable result.
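The sqrt(3)/3 figure above follows from a counting-statistics model (my
assumption here: each referee's verdict is an independent noisy measurement of
a paper's quality, so the panel average's relative uncertainty scales as
1/sqrt(n)). A minimal sketch:

```python
import math

def relative_uncertainty(n_referees: int) -> float:
    """Relative uncertainty of an n-referee panel's average verdict,
    assuming independent referees (standard error of the mean ~ 1/sqrt(n))."""
    return 1.0 / math.sqrt(n_referees)

for n in (2, 3, 7, 8):
    print(f"{n} referees: ~{relative_uncertainty(n):.0%} relative uncertainty")
# 3 referees gives ~58% (the sqrt(3)/3 quoted above); 7-8 gives ~35-38%
```

So even doubling the panel only shrinks the noise by a factor of ~1/sqrt(2),
which is why the practical cost rises much faster than the reliability.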

~~~
jessriedel
Refereeing is already a huge time sink. I don't believe doubling the referee
burden is possible without major incentive changes.

~~~
ISL
Agreed completely. Doing it right takes a lot of time.

------
crocowhile
For those who are not familiar with the topic: retracting a News and Views
piece was like giving the journal a (quite deserved) middle finger.

------
wallio
Fascinating issue and the rest of the Retraction Watch website is eye opening!

I rarely read academic journals anymore, but I had always assumed the top
journals did a good job of filtering out bad papers and quickly retracting
those that were later disproven. Like every human activity, it is not so
simple.

[http://retractionwatch.wordpress.com/](http://retractionwatch.wordpress.com/)

Edit: Warning, this blog is addictive - I just spent a solid hour reading.
It's kinda science gossip, but still important.

------
tokenadult
Now that we have online publishing, and all scientists around the world appear
to have email and Web access, we can do much better. Jelte Wicherts is a young
professor of psychology who would like to clean up peer review and scientific
publication practices in that discipline and other disciplines. In Frontiers
of Computational Neuroscience (an open-access journal) he and his co-authors
give a set of general suggestions

Jelte M. Wicherts, Rogier A. Kievit, Marjan Bakker and Denny Borsboom. Letting
the daylight in: reviewing the reviewers and other ways to maximize
transparency in science. Front. Comput. Neurosci., 03 April 2012 doi:
10.3389/fncom.2012.00020

[http://www.frontiersin.org/Computational_Neuroscience/10.338...](http://www.frontiersin.org/Computational_Neuroscience/10.3389/fncom.2012.00020/full)

on how to make the peer-review process in scientific publishing more reliable.

"With the emergence of online publishing, opportunities to maximize
transparency of scientific research have grown considerably. However, these
possibilities are still only marginally used. We argue for the implementation
of (1) peer-reviewed peer review, (2) transparent editorial hierarchies, and
(3) online data publication. First, peer-reviewed peer review entails a
community-wide review system in which reviews are published online and rated
by peers. This ensures accountability of reviewers, thereby increasing
academic quality of reviews. Second, reviewers who write many highly regarded
reviews may move to higher editorial positions. Third, online publication of
data ensures the possibility of independent verification of inferential claims
in published papers. This counters statistical errors and overly positive
reporting of statistical results. We illustrate the benefits of these
strategies by discussing an example in which the classical publication system
has gone awry, namely controversial IQ research. We argue that this case would
have likely been avoided using more transparent publication practices. We
argue that the proposed system leads to better reviews, meritocratic editorial
hierarchies, and a higher degree of replicability of statistical analyses."

The practices recommended in Wicherts's article would have made a retraction
like the earlier one from Nature reported here much less likely to be
necessary in the first place.

------
lolgay5
Absolutely shameful. Nature editors should step down and issue a public
apology immediately.

~~~
cantankerous
I'd tend to agree if we had some 3rd party verification of the inability to
reproduce the results of the initial study. Right now it's somebody vs
somebody else.

~~~
prospero
The article mentions several independent failures to reproduce.

~~~
dllthomas
Further, "We think it's reproducible" is an absurd defense. If it's
reproducible, point to a reproduction. If it hasn't been reproduced, you don't
have any clue if it's reproducible.

------
danbmil99
Did I read correctly, that the authors of the original paper were allowed to
review the rebuttal article, and even worse, their single negative review
caused the paper to not be published?

Isn't there some concept of recusal, conflict of interest in allowing a
researcher whose results are being challenged to review the challenge, and
veto its acceptance? That's absurd on its face.

------
spartango
Like virtually every other collection of people, the biomedical sciences have
their own drama and politics.

It is unfortunate that these occasionally impede the progress of science, and
downstream, the eventual treatment of patients. Even so, much good work is
done and will continue to be done into the future, despite politics.

------
ezy
These journals like to say they need the exclusivity, prestige and money to
support their editorial process. Yet, given the exclusivity, money and
prestige, they apparently don't do a great job.

On the other hand, is there a clear alternative? Is there a way to organize
the review process such that the power of the editor is reduced, yet the
quality is maintained? I'm not sure.

It seems like this would require non-anonymous open review and some kind of
rating/comment system for papers. What may end up happening is that it
creates more work for the reader, in the same way that it takes effort to
tease out a product's true worth on Amazon. That might not be a bad thing at
all.

~~~
scientist
The alternative is open post-publication peer review: any scientist that reads
a paper in her/his field of expertise may act as a reviewer and publish a
rating or review of that paper.

The end result does not create more work for the reader, as something like the
average of ratings received by a paper gives in many cases more information
than just the name of the journal where it has been published. For example,
this Nature paper currently has a rating of 2% on Epistemio, although it has
been published in the prestigious Nature:
[http://www.epistemio.com/p/Jr3gS9VW](http://www.epistemio.com/p/Jr3gS9VW) .

The journal name is too weak a signal compared to individual ratings of
papers, as the distribution of the quality of papers within a given journal
is very skewed.

See also the papers in
[http://www.frontiersin.org/Computational_Neuroscience/resear...](http://www.frontiersin.org/Computational_Neuroscience/researchtopics/Beyond_open_access_visions_for/137)

~~~
jessriedel
This will never happen without some incentive for scientists to review papers.

------
Gravityloss
While it is dramatic, these kinds of issues are often exploited by people who
want to deny entire scientific fields. They can claim that all research is
worthless.

~~~
glenra
Doesn't that make it even _more_ important that research journals should be
transparent and follow their own policies? If top journals were willing to
admit mistakes promptly it would be much harder to claim that valid
alternative views are being suppressed for political (or other) reasons.

~~~
Gravityloss
Yes.

------
driverdan
> it showed that expression of CD95L on Sertoli cells in allogeneic mismatched
> testes tissue transplanted under the kidney capsule was able to induce
> apoptosis of invading cytotoxic T cells

I imagine the way reading this made me feel is similar to how an average
person feels reading about complex computer science issues.

~~~
refurb
Basically it's just saying that if you take a particular genetic sequence and
introduce it into foreign cells, you can put them into a body where you'd
expect an immune response (T cells attacking), but in fact you see those
cells die instead (apoptosis).

It's too bad it didn't turn out to be true as it would have been an incredible
advancement in transplantation.

