
Why publish science in peer-reviewed journals?  - ig1
http://www.genomesunzipped.org/2011/07/why-publish-science-in-peer-reviewed-journals.php
======
vacri
I have one single solitary paper that has been published in a peer reviewed
journal, so my experience is present but limited.

What I gained from the peer review process was an article that had its grey
areas filled in and its statistical methods applied more rigorously. The
quality of the reviewed article was much better than the quality of the
submitted article.
The reviewers were reading the article with an even-handed critical eye; they
weren't just looking for support for their pet opinions.

There's also an aspect of criticism - criticism is expected from reviewers,
but random dude A on the internet is frequently going to get something along
the lines of "who are you to criticise my methods?", or any request for more
info declined. "Please provide more info" will work if you need to get a paper
published, but won't work if you have _already_ published your paper on scribd
- where's the motivation to do rework? (Beyond honour, of course, but
scientists are human and honour does not always apply.)

And then there's the point that popularity doesn't equal 'correct'. Here on HN
I've seen time and time again that a throwaway _popular_ comment gets upvotes,
and a well-reasoned but _unpopular_ comment gets downvotes. Peer-reviewed
articles should be about scientific rigor, not visceral feeling. Consensus of
the masses is orthogonal to the needs of scientific rigor.

The main article describes very, very little in the way of quality assurance.

~~~
Estragon
Yes, the popularity of useless, emotive comments here really bugs me. There is
an inverse correlation between the points I get for a comment and the amount
of time and expertise I put into it.

In the scientific case, though, "popularity" would be measured by the extent
of citation. (Really, it already is.) That is a considerably higher bar than
"I think I'll click on that arrow." For a paper to be cited, it has to make
enough of an impact that people remember it and its ideas shape their
thinking.

------
wickedchicken
I see a pretty significant issue with "one-click voting" for a scientific
paper. Reading papers is often a tough exercise and one that involves
rereading, thinking, and rereading again. How many times have you read a
_dense_ webpage, reread it over the course of a week, then clicked the upvote
on HN? Never. You glance at the webpage, think "that's neat" and click away.

This isn't to say that the journal-of-reviewers model is perfect, but I am
wary of casually tossing it away in favor of "upvotes." Upvotes have massive
semantic ambiguity: does it mean the article is accurate? Does it mean the
article is funny? Is it maybe wrong, but important to think about? Does it
merely match the reader's interest? How do you express something such as "we
believe your underlying methodology is correct but aren't certain your
analyses are rigorous enough to prevent X." A downvote?

~~~
copper
A better model would be something like what the Real-World Haskell book used
when it was under development: allow comments (anonymous or otherwise) on each
paragraph, and refine the content based on those comments. Of course, this
does have the twin problems that it doesn't keep results private, and makes
credit assignment a _real_ pain :)

Even with all the problems it has, I actually prefer peer-review right now.
Some of the stuff that gets self-published on arXiv is of sketchy quality, and
there's no quick way of figuring that out just by reading the abstract if
a 'known' person hasn't co-authored it.

~~~
_delirium
In physics at least, a substantial proportion of peer-review goes via arXiv
first anyway, so it's not really an either-or. In many cases, the real peer-
review happens when people see a new paper in their area on arXiv, they send
in comments or challenges, etc. to the authors, and in the best case, by the
time a journal article is submitted for formal peer review, many of the
relevant peers have already reviewed and helped improve it.

Some mathematicians have also gotten pretty active in using blogs and wikis
for peer review and even actively doing research. I don't expect journals to
go away, but I wouldn't be too surprised if there are some areas of the
sciences where they become more of a formal archival service: where papers go
to be filed on shelves once the real action/review/dissemination has already
happened.

~~~
copper
fwiw, that's true for cs and stat, and I assume for math too. What I refer to
is something like this:

[http://arxiv.org/find/all/1/AND+AND+ti:+riemann+ti:+hypothes...](http://arxiv.org/find/all/1/AND+AND+ti:+riemann+ti:+hypothesis+ti:+proof/0/1/0/all/0/1)

This is a particularly egregious example; but if I'm looking for something on
a different subject where I don't know some of the notable researchers, or
don't know what I'm looking for, it would take me a lot more time than I'd
like to spend to filter out the more interesting papers.

------
bendmorris
While I'm in favor of opening up journals and possibly putting out some
nonprofit alternatives, and while I recognize that the peer review process is
definitely not without its problems, be careful not to discount it completely.
I cringe when I hear phrases like "value to a community" and "collective
opinion" used to refer to scientific articles. The average reader is not
qualified to judge the merits of an article, hence the peer review process.
Some great examples: the age of the earth, evolution, global climate change;
three areas where public opinion is divided but experts overwhelmingly agree.
The value of science is based on its accuracy (which is best determined by
experts) not its appeal to the masses.

~~~
hotpockets
I was assuming that the article was suggesting a system that is similar to
reddit, but with important differences. For instance, requiring real names and
public voting records. Or, you could reduce a person's vote importance if they
are voting on an article outside their area of expertise. There are probably
other things you could do too.

~~~
rflrob
Even cooler would be to have each person (explicitly or implicitly) set other
people's vote importance. I can imagine a complicated system wherein voting
cliques are detected, and used to feed back into the rankings of what papers
show up on your page.

Ideally, the voting could be more than just "+1/-1" as well, potentially with
a couple different axes, so that I could, for instance, say that I think a
paper is important to the field, but likely wrong. Just because a paper is
controversial doesn't mean I don't want it to be discussed (for instance, if
it introduces a new computational technique, but the input data is flawed
somehow).
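
A toy sketch of what such expertise-weighted, multi-axis voting might look
like (the names, the weighting rule, and the axes here are all invented for
illustration, not a description of any real system):

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Vote:
    voter: str
    axis: str          # e.g. "importance" or "correctness"
    value: int         # +1 or -1
    expertise: float   # 0.0-1.0: hypothetical standing in the paper's field

def tally(votes):
    """Sum votes per axis, scaling each vote by the voter's expertise."""
    scores = defaultdict(float)
    for v in votes:
        scores[v.axis] += v.value * v.expertise
    return dict(scores)

votes = [
    Vote("alice", "importance", +1, 1.0),   # domain expert: full weight
    Vote("bob", "importance", +1, 0.25),    # outside their field: discounted
    Vote("alice", "correctness", -1, 1.0),  # "important, but likely wrong"
]
print(tally(votes))  # {'importance': 1.25, 'correctness': -1.0}
```

Splitting the score across axes is what lets a paper rank high on
"importance" while ranking low on "correctness" - something a single +1/-1
cannot express.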

~~~
wickedchicken
You're matching an engineering solution to a social problem. Trying to map a
paper's social significance to an abstract number is _really hard_ , and in
the process you lose a lot of ability to present papers based on taste.
Showing me a paper because "these numbers are high and you tend to like these
kinds of numbers" is fundamentally different than "you should see this because
it is groundbreaking." On the other hand, if we can create a metric for
elegance and another for "brings the reader closer to enlightenment," then I'm
all for it.

------
flocial
The article doesn't touch on the fact that all this scholarship, usually
funded in some way with taxes, is locked up behind pay walls unless you want
to pay $10-$40 per article (that's probably a bigger obstacle). However, ArXiv
works because at least with harder sciences you have proofs and empirical
results that are more clearly defined. It would be harder to judge papers in
economics, humanities, etc. (there's already a lot of variance in the quality
of scholarship even under the current published model; I don't see how it
would flourish under an open model aside from becoming more like the blogs we
read here - no offense).

There have been many experiments where people have passed jargon-laden
papers, more fiction than research, off to "respected" journals. If you can
solve the quality/trust problem and make it acceptable for governments and
other funding bodies, then we might move forward.

The thing about the current model is that they distribute pdfs of all things
when I think the raw text, bibliography files, and even data should be
distributed with the article and available at no cost.

~~~
starwed
_The article doesn't touch on the fact that all this scholarship, usually
funded in some way with taxes, is locked up behind pay walls unless you want
to pay $10-$40 per article (that's probably a bigger obstacle). However, ArXiv
works because at least with harder sciences you have proofs and empirical
results that are more clearly defined._

Most of the stuff on arxiv is _also_ published in peer reviewed journals.
That's why they are called preprints.

~~~
flocial
But not necessarily. There are some landmark papers like Grigori Perelman's
solution of the Poincaré Conjecture that only exist on ArXiv.

------
a3_nm
My experience (in computer science, ymmv) is that a vast majority of
researchers would agree that, with the Internet, the profits made by journals
are excessive relative to what they do (i.e., what they actually do, not what
they outsource to other researchers).

The reason they are still there isn't that we're convinced we need them (as
the article implied); it's inertia. This is the
criterion by which most researchers judge each other and themselves, and it
survives because few people have the guts to refuse it: everyone knows it's
evil, but it's hard to walk away. People are lured by prestige, and by the
tangible consequences of prestige (like funding).

We do not need a killer app to get rid of the journals, we need a good number
of motivated and prominent researchers to opt out and bootstrap something
else.

------
cmonsen
Not to mention that publication bias leads to incorrect results. See
[http://www.theatlantic.com/magazine/archive/2010/11/lies-
dam...](http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-
and-medical-science/8269/) or the PLoS paper:
[http://www.plosmedicine.org/article/info:doi/10.1371/journal...](http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0050201).
I am most hopeful for Google Scholar or Mendeley (mendeley.com) to open this
up, but who knows?

------
rnadna
Yesterday I got an email from a journal asking me to submit a paper. They
needed it within 2 weeks. And it would be published 2 weeks thereafter. After
I spewed coffee through my nose, laughing, I clicked the "spam" button.

Publication in real journals is slow; reviewers are asked to send comments
within a month, but they often take more than that. Publication in real
journals is difficult; the best-ranked journals reject 90 percent of
submissions. Publication in real journals is flexible; complicated decisions
about how to transform the unpublishable into the publishable are made at
each step along a long, often meandering, path.

The simple answer to the question posed ("why publish in peer-reviewed
journals") is that, for many fields, not doing so ends a career.

But if the question is, as comments here and many conversations I've had with
students suggest, "what value do peer-reviewed journals have?", the answer is
that, for many fields, publishing in them improves the science, both for the
individual paper and for the larger corpus.

------
thret
The issues he discusses seem to be addressed by phygg, essentially a
combination of arXiv and digg for physics: <http://www.phygg.com/phygg/>

I see no reason why this shouldn't work for arXiv as a whole.

~~~
chalst
Most recent "paper" on Phygg, from July 2nd, begins _Red Bull Hats: Wholesale
from factory, limited time! Get MLB Hats, NFL Hats, New Era Hats for your
favorite teams_. It's gathered net upvotes. Most papers there have been
automatically cross-posted from Arxiv.

I don't think we can take Phygg to be a proven model. There's a wider issue of
how new gatekeepers can be established for such a new model.

------
jgamman
and no mention of mendeley? <http://www.mendeley.com/>

the pain point is that people are focused on the publication - you need to
focus on the people. i don't care what you think about a paper, but i want to
know what Feynman has published and who he keeps an eye on. professional
scientists know who to keep an eye on - the rest is noise. science is the
oldest social network there is...

------
BasDirks
" _peer review is costly (in terms of time and money)_ "

This is an objection why? _Of course science costs time and money._

 _"However, journals do perform a service of sorts: they filter papers,
ostensibly based on interest to a community, and also ostensibly ensure that
the details of an analysis are correct and clear enough to be replicated
(though in practice, I doubt these filters work as intended)."_

Nobody cares about your doubts. Back it up.

~~~
anghyflawn
>Of course science costs time and money.

One problem is that peer review is essentially free for the journals, and
people have to take time for it out of the other things they have to do (like
teaching, or scrambling to get Yet Another Publication before the tenure
review comes up, or doing the reporting to 550 external agencies). People are
expected to uphold the highest standard of rigour essentially in their free
time. In that sense, the current peer review leads to waste, by decreasing the
efficiency of the investments originally made into other things.

------
eru
Using this system, you can still gather a bunch of web-published articles,
print them out together, and call the results "Nature". Just in case you miss
the current system.

------
nolite
It gets you funding

~~~
danieldk
This. I guess at least a fraction of scientists are open to a change in
reviewing. However, during funding applications and interviews, the number of
publications in journals is _very_ important. Hence, if you want to stand a
chance in science, you submit to journals.

------
NY_Entrepreneur
The author's ideas are wacko and won't work.

I've published two technical papers in peer-reviewed journals where I was the
only author, published several more papers in peer-reviewed journals where I
had co-authors, and wrote a Ph.D. dissertation that was reviewed as "an
original contribution to knowledge worthy of publication".

The papers I published were reviewed essentially on the usual criteria of
"new, correct, and significant".

Here is one place the author gets way off track: For a technical research
paper, i.e., one that is new, correct, and significant, reading it is
difficult, and judging whether it is new, correct, and significant is more
difficult. In an experimental science, judging whether the work is
reproducible adds more difficulty. It's TOUGH work.

The work is so tough that for some huge majority of papers that are published,
nearly the only careful reading at least for some years will be during the
review process. Maybe later the paper will get referenced, carefully read by
many people, etc.

So, the idea of the article of this thread that commonly for papers that just
appear on-line there can be 'votes' from many competent readings is just
uninformed, misinformed, ignorant, just plain wrong, brain-dead nonsense.

Here's more: Now, when an author A sends a paper to journal X, there is a good
chance that the paper is actually in or close to the specialty S of journal X.
So, the reviewers of journal X have a good chance of being among the best
people to review the paper. If journal X does publish the paper, then author
A, the paper, and people interested in specialty S get a lot of help because
journal X has said that the paper is new, correct, and significant for
specialty S.

STILL, even with all this help for specialty S, the paper will likely be
carefully and competently read at least for some years essentially only by the
reviewers of journal X.

Since it is so difficult even with the help of journal X in specialty S to get
such papers read, for a paper just posted on-line there is no hope at all.

Maybe in some experimental sciences reading a paper is relatively easy;
however, judging if the experimental work is reproducible promises to be
difficult.

Maybe not in experimental science but in essentially all of the rest, there is
a fundamental difficulty about research that is new, correct, and significant:
Almost necessarily nearly no one will understand the paper easily or
understand the paper at all. Else, given basic academic competitiveness, the
results in the paper would long since have been old instead of new. So, the
research and review processes have to work always right at the edge of what
can currently be understood at all. For meeting this fundamental difficulty,
the present journal system helps, and just putting papers on the Internet is
hopeless.

~~~
chalst
Did you read the Richard Smith article he linked to?

<http://breast-cancer-research.com/content/12/S4/S13>

~~~
NY_Entrepreneur
Okay, I just read

<http://breast-cancer-research.com/content/12/S4/S13>

Yup, again, his ideas are wacko and won't work.

Again, the main problem with just putting the papers on-line and letting
people 'review' them then is that mostly the papers won't get read nearly as
carefully as with the present peer-review process; or, even if some paper does
get read carefully, other readers mostly won't have any very good way to know
this.

Uh, publish several well regarded papers in specialty S, and then maybe get
invited to review some papers in specialty S. Do well reviewing and impress
some journals in specialty S, and maybe get invited to be an editor of a
journal in specialty S. Do well as an editor, and when Elsevier, etc., starts
a new journal in specialty S, maybe get invited to be the editor in chief.
That takes ballpark 20 years. Net, the present paper review system has a
severe 'promotion' mechanism that is an enormous aid to quality the journal
readers can value. That is, the readers have a good way to know about the
quality of the reviewing. With the proposal of this thread, even if a paper
got a careful review, the readers would not know this. That is, the proposal
throws out the baby with the bathwater.

As I described, reading research papers and judging if they are new, correct,
and significant is TOUGH work. The present process makes a good effort at this
tough work, and his proposal omits any very serious effort at such work.

With irony, eventually his proposal would lead to a system of 'upvotes' from
'respected reviewers'! Or, "I don't understand the paper, but it was upvoted
by Professor Isaac Iatrogenic at Sawbones Medical School."!

He still wants voting, but he has no well designed 'voting system', and the
present peer-review process does. If he wants a better system, then okay, but
his proposal is for essentially no system, and that would be much worse.

For his many arguments, mostly they omit a big, huge point: He's considering
the problems with the papers that were accepted but is not considering the no
doubt on average much more severe problems with the huge number of papers that
were rejected. His proposal to 'publish' just by posting on the Internet would
result in all the rejected papers also being posted and, then, of course,
soon, still more junk papers.

Why buy Tide laundry detergent? Why drink Coke? Are we afraid that both Tide
and Coke might give us something toxic? No. Of course not. Why? Because Tide
and Coke have been used by many millions of people and are from companies with
a huge financial fortune to lose if their quality falls. So, we 'trust' Tide
and Coke. So, Tide and Coke are for us valuable 'brands', "valuable" because
we can trust them because we know how much evidence there is that the products
are at least okay.

So, similarly, the peer-reviewed journals provide researchers with such
valuable 'brand names' they can trust better than just some paper posted on
the Internet.

Net, the author of

<http://breast-cancer-research.com/content/12/S4/S13>

just doesn't 'get it'. He's looking for something he wants, doesn't get as
much as he wants, and proposes a system that is still worse.

Let's take his point about publication delays. That doesn't matter very much!
Once the paper is submitted, it's no longer a secret, and mostly relevant
researchers don't actually have to wait for the printed version two years
later to make use of the paper.

Let's take his point about the 'cost' of reviewing: Well the reviewers LIKE to
review because it helps them keep up in their fields and, thus, is good for
their careers! The effort to review is not a 'cost of the peer-review system'
but just routine work by the reviewers for their own careers. In a hot field,
reviewing the papers can be an advantage in time; the reviewer gets the papers
maybe two years before someone just looking at published papers.

Let's take his point about reviews of papers after they are published: Sure,
over time, including after publication, the reputation of a paper can change.
He wants to use this effect to replace the initial peer-review process. But he
ignores that he is talking about papers that have been published and, thus,
have passed peer-review, that is, are already 'certified' as being on the
inside of the fence. If we just publish papers on the Internet, then that
first step of being 'certified' will be missing, and the paper might just
languish much longer before anything like the post-publication review process
he likes can begin.

Or, he's missing the big point about how to filter water: First, just before
the pick up of the pump, have a screen that filters out the rocks down to the
size of sand. Second, have a fiber filter that filters out all the solids down
to about 10 microns. Maybe next deal with the usual suspects of compounds of
calcium, iron, and manganese. Next deal with bacteria. Next maybe use reverse
osmosis or distillation to deal with everything else. Likely now have some
good water to drink. Well, paper reviewing also has 'stages', and one of the
early ones, but not the last one, but one that helps the last ones, is peer-
review. E.g., we don't ask the reverse osmosis to filter out the gravel!

His point about novel, original, or innovative work being rejected is poorly
considered: We're not talking popularity of fashion frocks here. Instead,
novel, etc., or not, a paper still has to be new, correct, and significant.
Believe me, a solid proof that P = NP will not be rejected because it is too
'novel'! It will be accepted because the work is, again, new, correct, and
significant. It is 'novel' mostly because it is especially significant. Again,
to get published, pass new, correct, and significant. If after that the paper
is also wildly 'novel', so be it; it still got published!

He missed the big, huge point about focus on research specialties: As I
described, if paper from author A goes to journal X in specialty S, then
likely already the paper is in the right set or an appropriate set of editors,
reviewers, and researchers. If the paper is well regarded, then the people in
specialty S learn about it fairly quickly. So journal X serves as an
'accumulation point' for specialty S. If we just publish papers on the
Internet, then we lose the value of such 'sets' and 'accumulation points'.

So, indeed, if his proposal were to become accepted, an early step would be to
have, for each specialty S, a special place to post papers in specialty S.

Then to keep down the noise level, that special place for specialty S would
have a 'moderator'. A good 'moderator' would be the editor in chief of journal
X. Soon he would get too busy and would want some editors, say, from journal
X. The editors would get too busy and soon would want some reviewers, say,
from journal X. Then all concerned would want some quality control for their
special place for specialty S. So, the moderator would call himself the editor
in chief again and insist that his editors insist that their reviewers
actually do good peer-reviews. Maybe the whole Internet site would be
sponsored by Elsevier. Then they would charge for the papers that passed the
reviews. Now except for tree cutting, we would be back where we are now.

If he wants to improve the quality in publishing, then I have a suggestion for
him: Do some brilliant research work, write it up as a paper, and submit it!

~~~
chalst
Richard Smith does not propose a voting system, but the complete elimination
of gatekeepers. It's worth emphasising that he is talking about biomed
exclusively: other disciplines have different scholarly cultures. Most
crucially, the commercial impact of biomed papers is higher than in other
fields.

When you write

>reading research papers and judging if they are new, correct, and significant
is TOUGH work.

I agree, but when you write:

>The present process makes a good effort at this tough work

I am surprised you say this. Smith points to studies that show that referees'
opinions have very little correlation, which suggests the minimum level of
filtering actively done by referees is very low. It does not follow from this,
though, that the institution of journals considered as a whole does not
perform a useful filtering function, which is to say that I agree with you
about your point about successive filters.

>Publication delays

Many journals follow anonymous reviewing and forbid distribution of
preprints.

The other point, which he does not make, is that referees have very little
incentive, besides the opinion of the editorial board, to do their job
properly.

>His point about novel, original, or innovative work being rejected is poorly
considered. ... If after that the paper is also wildly 'novel', so be it; it
still got published!

There's no doubt in that case that the journal has been an impediment to
science.

I don't advocate abandoning peer review. I do think we should be more open-
eyed about the problems with it. It is a costly system, and one that does not
work the way it is intended. Science needs gatekeepers, as you argue, although
they need not resemble the current journal system.

~~~
NY_Entrepreneur
On Smith's article and a voting system, uh, I read Smith's article ASAP! I was
assuming, without careful checking, that Smith's article was much like that of
this thread. Looks like you got me in some too fast reading!

Yes, it looks like these articles are for biomedical work. I'm sure it's
different in many small respects and maybe some larger ones.

Uh, I said that the present system makes a good "effort", and you countered
with some of the problems with the "results"! You may be right!

The "correlation" point is not very impressive: the review process is not quite that
simple. Here's a dark, dirty little secret: Often at some point in the
toughest part of the paper, some reviewer thinks, "I tried to look up the
background for that, and I don't have the prerequisites even for the
background. I can't take out a year to study to be ready to read this paper.
The parts of the paper I can read look rock solid. The new parts I can read
look nice. His references are high quality. His writing is carefully done. He
doesn't make wild claims. It looks like a solid piece of work. I can't find
anything wrong with it. Let him publish it: If it has a big error, then maybe
that will come out and be on him. I'll give him a pass and go to dinner." or
some such. So in this case correlation doesn't mean much!

"Many journals follow anonymous reviewing and forbid distribution of
preprints." Sure. In most simple respects, the paper is still all locked up
until it appears in print maybe two years after submission. Still work goes
on! So, the author of the paper can present parts of the work in lectures and
seminars. Guys down the hall know and can start working on extensions. The
author can work on extensions and give talks and reference the paper as "in
submission". Generally the word gets around! Net, the publication delay
doesn't much slow the actual research work. That is, the actual exploitation
of the work in the paper doesn't really start with the publication and people
reading it. Instead, 'the word gets around'.

"The other point, which he does not make, is that referees have very little
incentive, besides the opinion of the editorial board, to do their job
properly." The referees have a LOT of "incentive ... to do their work
properly" if they want progress in their academic careers! Good referees get
invited to be editors; good editors get invited to be good editors in chief. A
referee, editor, or editor in chief is a 'gatekeeper' and, thus, a somewhat
powerful guy. If the journal is good, the names are up in lights. So,
professional reputation is enhanced. Professional connections are made. In
terms of HN, there are connections in a professional social graph. A referee,
editor, etc. gets to know who the up and comers are in their research field.
They get early notice of new research results and, then, can start working on
extensions. If they do a bad job, then their editor will soon not send them
any more papers to review!

"There's no doubt in that case that the journal has been an impediment to
science." I don't see that. Einstein published general relativity. People are
still trying to understand it, test it, and understand the implications. So,
at the time of publication, just how novel it was was not at all clear. But it
got published and passed the first step in the filter. Slowly people
understood that the paper was very important and worked hard on it. The
publication process didn't end the work on the paper but did its job. I can't
believe that many of the reviewers actually understood the Riemannian
geometry. Still the paper got published. If the paper was actually nonsense,
then that would be on Uncle Al.

I believe that the current system is okay. The people objecting just want the
system to be much more than it is; they may be asking for too much. One way to
improve is to quit chopping down trees, but that change is likely on the way.
But replacing the current system by putting papers on the Internet and letting
nearly anyone 'vote' would be much worse.

