

The case against peer review - fidgross
http://www.slate.com/id/2116244/

======
bmastenbrook
My experience being on the reviewer side of the process is very limited. As a
frequent reader of papers in CS I'm of the opinion that peer review is
accomplishing very little outside of checking theoretical results with proofs
that can be vetted. I can count on one hand the number of papers I've read in
the past month or two describing an implementation of a technique where the
implementation described was available publicly. In all other cases, I had to
take the authors' word for it. Given the poor handling of data for
publication I've seen in the past, I'm not especially inclined to do that.

I also don't believe that the current conference / journal system for
publication of results works especially well. There's a well-documented bias
against the publication of negative results. It encourages authors to "save
up" interesting work for publication when pieces of it could have been
independently examined much sooner. In combination with "publish or perish"
metrics, it discourages valuable or interesting work that might not ever rise
to the level of a full journal publication. The format is incredibly stifling;
important information is left out to satisfy page limits, code mangled to fit
within columns, and interesting data visualizations not even considered
because of the constraints of the medium.

I think it's inevitable that the current common publication system will be
recognized as a relic of the paper era, but I fear that any fix will only make
superficial changes to the system. I think it'd be much more useful to have a
publication system that allows work to be published with full data in a format
that allows evolution in response to open reviews. This obviously wouldn't
work well in areas where protection of patient privacy requires data
obfuscation; I don't have a good answer for that. In all other areas,
especially where the research is funded by public money, full data and code
transparency ought to be key, especially where raw data must be adjusted or
normalized in order to be useful and in situations where bugs in modeling
programs could subtly affect results.

Edit: see also this excellent article from last month's Atlantic:
<http://www.theatlantic.com/magazine/print/2010/11/lies-damned-lies-and-medical-science/8269/>

~~~
ohmygodel
"As a frequent reader of papers in CS I'm of the opinion that peer review is
accomplishing very little outside of checking theoretical results with proofs
that can be vetted."

As a frequent reader and reviewer of CS papers I totally disagree. Peer review
helps narrow a huge incoming flood of new research into a smaller stream of
results that are the most novel, important, and correct. Without explicit
review, your options would be to either waste your time wading through the
mess or use reviews from sources whose advice you consider on an ad hoc basis.
I'm sure there are better systems that take advantage of the Internet, the
wisdom of crowds, data aggregation and mining, etc. The current system of peer
review does adequately perform a major service, though.

~~~
bmastenbrook
Given the huge number of niche conferences and journals out there, is the
filtering that takes place actually useful? I'm not convinced of it. I've
engaged in venue shopping myself. Meanwhile useful, novel work is published in
the open every day and my brain hasn't collapsed from the inrush of poorly
filtered information. I run across junk, but I find published junk too. At
least the former usually has the decency to not waste my time with a bunch of
academic boilerplate language and a page of barely relevant citations.

~~~
ohmygodel
Niche venues cater to specialists in an area. Even in specialized areas there
is more output than an expert can keep up with, and these smaller venues
generally perform valuable filtering as well.

"my brain hasn't collapsed from the inrush of poorly filtered information" Of
course not. However, you are either wasting a ton of time or you are using
some other indirect method to decide what papers are useful to read. I'd love
to hear if you think your personal approach to filtering non-reviewed content
is superior to relying on peer review.

~~~
bmastenbrook
Once again, I'm not convinced the specialists are actually filtering usefully
instead of just defining the boundaries of their niche.

You've assumed that the non-reviewed content I'm reading is in the form of
papers. It isn't; it's blog posts and code.

------
roadnottaken
Peer review is _not_ "the gold standard of modern science." It is the very
first step in a long process of communal collection and evaluation of data.
Everyone knows it's badly flawed, but it's better than nothing. It's only when
scientific results are reproduced and (more importantly) built upon that an
experiment or finding becomes widely accepted (and thus "true").

The problem comes in when the media pounces on every wild claim in the
literature and restates it as fact before the ink is dry. Actually, it's not
just the media's fault; a big part of the problem is the press releases that
are becoming a standard part of publishing in high-profile journals.

EDIT: An analogy: saying that peer review is the gold standard in science is
like saying that _releasing_ open-source software is the gold standard for
security. Just because something is open/reviewed doesn't mean it's been
deeply vetted.

~~~
lotharbot
Right. Peer review doesn't mean "this paper is correct", it merely means "this
is interesting enough to be worth a look." It's a way of filtering out the
clearly bogus and the trivial/repetitive. True progress starts after peer
review and publication, when other scientists look at the same model, write
their critiques, and do their own experiments.

I'd think this would be old hat to anyone who went to grad school in a
technical field. I know in my department, all students were expected to read
and discuss several peer-reviewed papers each term. Finding methodological
weaknesses, oversimplifications, or outright mistakes was common.

------
jamii
The guys behind arXiv have some interesting thoughts on the subject:

<http://people.ccmr.cornell.edu/~ginsparg/blurb/pg02pr.html>

------
dspeyer
If not peer review then what? Centralized review? Is there any hope that that
won't collapse under its own weight? Public review? That hasn't done all that
well in the Open Source world.

~~~
vacri
Come now. Out of literally thousands of peer-reviewed articles that hit the
journals every month, the author provides one example of a bad paper from 2001
and some vague handwaving at problems in the early '80s.

Clearly this means that the system is a total and utter failure and should be
tossed ASAP.

Leaving the sarcasm behind now, peer review is but one element in the
scientific process - and okay, it failed for the example paper. But another
element - free and frank community feedback (i.e., no sacred cows) - killed the
paper, by the author's own admission. It's but one element of a series of
checks and balances.

Besides, although I've only published one paper, the feedback from the peer
reviewers made that paper tighter and more professional - so for my solitary
data point, that element of the process did improve the quality of the
article.

------
mazsa
the case for peer review:
<http://www.mcafee.cc/Papers/PDF/EditorExperiences.pdf>

