
Nature rejects double-blind peer review - how corrupt is academia? - andreyf
http://science-community.sciam.com/blog-entry/Sciam-Observations/Nature-Editors-Reject-Peer-Review/300009482
======
tptacek
I've reviewed a bunch of submissions to ACM Transactions on Networking, and
they were all double-blind.

Usenix, on the other hand, is "single-blind" (if you want to dignify this with
a name).

It makes a huge difference. Forget about gender bias. The real problem is bias
against newcomers. I was reluctant to write anything negative about well-known
researchers. When you're doing a bunch of these, you don't have a whole hell
of a lot of time to track down cites and evaluate experimental results. So
your default is going to be, people who could kick your ass in your field are
getting good reviews.

~~~
danteembermage
The trouble with double blind is that in many disciplines the number of people
publishing is quite small, and once you get into sub-disciplines the number is
very small indeed. If you've shopped a paper around at conferences and
workshops before submitting it, it's fairly unlikely that a person qualified
to review the manuscript doesn't already know who wrote it, or couldn't find
out with a phone call.

So effectively you get single blind (if that; it is often pretty obvious who
your referee was if you know the writing style, or if the referee keeps citing
papers by the same author). However, even though double blind often doesn't
work very well in practice, not doing it fails with perfect certainty.

~~~
tptacek
Bear in mind that established researchers aren't going to take a "superficial
No" for an answer, and maybe the reason pubs stay single-blind is that double-
blind would invite havoc; Raj Jain would be getting articles turned down, and
everything would bog down in appeals.

Oh, wait. No he wouldn't. ACM SIGCOMM is double-blind. Yeah, I have no idea
why everything doesn't work that way.

------
timr
The title of this post is excessively melodramatic.

It isn't necessarily "corrupt" to reject double-blind peer review. In my
experience, most scientists who are qualified to review a paper will be able
to infer the name of the researcher behind it, based solely on the content.

This is particularly true in areas where software packages or novel algorithms
come into play, because people tend to name their ideas for maximum
recognition. It's silly to hide the name of the researcher when the paper
refers to their pet algorithm a dozen times by name.

~~~
brlittle
I agree that the post head is melodramatic. But I think perhaps you missed
something in the article. As it pointed out, the kind of "easy guesses" you
mention are far less likely to occur in broad fields, even if they are more
likely in narrower fields. Besides, so long as the default "fail" state in a
double-blind review is at least _no worse than_ the current "succeed" state of
single-blind, I don't see why there would be an objection.

I confess that I've done a reasonable amount of graduate work, and I work at a
well-known college (staff, not faculty), and I was genuinely surprised to
learn that most reviews aren't double-blind. It seems to me to make more sense
as the default option.

But again, I don't disagree that the submitter is hyping the situation with
that head. The situation isn't necessarily "corrupt" -- but I can understand
the inclination to interpret it that way.

------
brent
Interesting title. You indict the entire community of scientific researchers
based on a single decision made by a single journal. Is your decision making
process any better than theirs?

~~~
andreyf
Touché, I'm sure there are many researchers who work hard and mean well.

But this isn't to say that the system of publication is perfect. Why don't we
innovate more in how we organize research on a macro level?

~~~
ced
That's not what he said. There's a world of difference between "the
publication system could be improved" and suggesting that academia is corrupt.

------
andreyf
Similarly, an interesting review of a book critical of the status-quo in
academia (The Access Principle, by John Willinsky):

<http://www.scottaaronson.com/writings/journal.html>

It seems that journals (and textbook publishers) give the authors of
intellectual works prestige and bragging rights in return for billions of
dollars worth of intellectual property. The more valuable the intellectual
property you give up, the more bragging rights you get. The entire industry
reminds me of Tom Sawyer's whitewashing trick:

 _Tom said to himself that it was not such a hollow world, after all. He had
discovered a great law of human action, without knowing it – namely, that in
order to make a man or a boy covet a thing, it is only necessary to make the
thing difficult to attain._
<http://www.pbs.org/marktwain/learnmore/writings_tom.html>

Or is there something I'm not getting?

------
DanielBMarkham
I'm concerned that the way we fund and review science has some big problems. I
don't want to get into a 30,000 word essay here, but some sciences are
starting to sound like echo-chambers where there is a narrative and research
is supposed to support that narrative. Double-blind peer reviews should help
with that, or at least try to help. But I remain skeptical that anything
serious is going to be done until science finishes politicizing itself
sometime in the next 2-3 decades.

~~~
brlittle
Nice observation, and the situation is, I think, exacerbated by the often-
cited "publish or perish" nature of things. I strongly suspect that this
increases the tendency to adhere to the narrative, making it even more self-
reinforcing.

Dangerous.

------
ltratt
There are many ways in which academia is non-transparent, but I don't think
one journal rejecting double-blind reviewing is a major one. In the areas of
computing in which I work not only can I guess who's written a paper (I would
estimate at least 75% of the time off the top of my head, and with a little
help from Google I imagine I could get well over 90%), but I can often guess
who's written a review of my papers (perhaps 10-20% of the time). Computing
might be an odd example - it's all I know. But I suspect that even most
"broad" subjects end up breaking down into sub-areas where the participants
tend to know each other pretty well.

Double-blind seems to me to be an attempt to try and cover up the much deeper
flaws of "peer review". Peer review doesn't in practice mean that a peer-
reviewed article is 100% correct, and that legions of scientists have honed
every sentence. It means it got past the prejudices of a few reviewers, and
(unless the reviewer is unusually committed) means that it didn't contain
anything that screamed "obviously wrong" on a quick read through. When I get a
genuinely good review (in particular if it finds a flaw in my work), I find it
immensely useful, but these are sadly in the minority. IMHO the greatest
reason why newcomers to the field find it hard to get papers accepted is that
their ideas have a higher chance of grating against those held dear by
the established people who act as reviewers; this is such a fundamental flaw
in the system that fiddling with double-blind barely even counts as fiddling
at the edges.

Given all this, it greatly amuses me, when in North America, to see products
advertised as being based on peer-reviewed science.

------
pixcavator
The idea of double blind review goes against the recent (almost as old as the
internet actually) trend of posting everything online long before publication.

------
amohr
Joao Magueijo has an interesting look into this problem (specifically
referring to Nature, too) in his book _Faster than the Speed of Light._ I read
it when I was studying physics and it really intrigued me to see how difficult
it is to challenge the status quo in a science that is defined by its
fundamental revolutions.

------
mercurio
Not very.

