

The Future of Peer Review - RichardPrice
http://techcrunch.com/2012/02/05/the-future-of-peer-review/

======
apu
I'm all for new publishing models in academia, but this post doesn't address
the fundamental difference between the web in general and academic papers:
expert curation.

The web's "curation" works (to whatever degree it does) because there are
thousands, if not millions of people who can rate the quality of most
articles/essays/websites. But in many scientific fields, there are often only
a very small number of people who would be qualified to review a particular
paper (usually well under a hundred, sometimes as few as a handful).

And make no mistake, this is a numbers game -- there's the well known 100/10/1
rule for content websites (for every 100 viewers, there are 10 who rate
content, and only 1 who creates content). The exact numbers vary quite a bit
from website to website, but the point is that it's _orders of magnitude_
difference. That's all fine if your initial pool of people is huge, but if you
start with 20 qualified people...you see the issue.
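
As a rough illustration, here's a back-of-the-envelope sketch in Python (the exact ratios are assumptions; only the orders of magnitude matter):

```python
# Back-of-the-envelope: the 100/10/1 rule applied to two audience sizes.
# The 10% / 1% ratios are illustrative assumptions, not measured values.

def participation_funnel(readers, rate_ratio=0.10, create_ratio=0.01):
    """For every 100 readers, roughly 10 rate content and 1 creates it."""
    return readers * rate_ratio, readers * create_ratio

for pool in (1_000_000, 20):  # a popular website vs. a niche subfield
    raters, creators = participation_funnel(pool)
    print(f"pool={pool:>9,}: ~{raters:,.0f} raters, ~{creators:,.2f} creators")

# pool=1,000,000: ~100,000 raters, ~10,000.00 creators
# pool=       20: ~2 raters, ~0.20 creators
```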

Another related point not addressed is that peer review is fairly demanding
work. You're not just saying "accept" or "reject", but you're obliged to write
a page or two suggesting the strong and weak points of the paper, things to
improve, etc. You need some mechanism for enforcing this (even if it's via
social norms), otherwise review quality deteriorates quite severely, which in
turn affects paper quality.

Instead of this article, I'd recommend the lectures and essays of Michael
Nielsen, who has thought about this problem in much more detail and (I
believe) has a much better understanding of the way things work.

<http://michaelnielsen.org/blog/essays/>

~~~
RichardPrice
Thanks for your detailed thoughts.

Peer review is already evolving. Increasingly, academics discover papers via
search engines (Google, Google Scholar) and social platforms (blogs, email,
arXiv, Academia.edu, Twitter). The growing use of these channels for research
discovery shows that the metrics driving them are doing a good job at expert
curation, i.e. surfacing good content.

On Google Scholar, it's citation counts that drive the ranking, and therefore
the discovery process. This is the "crowd review" process I talked about in
the post. With social platforms, it's recommendations from friends and
colleagues that drive the discovery process. This is the "social review"
process I talked about in the post.

In the pre-web days, people used to walk down to their libraries to read the
latest issue of a particular journal. In those days, the journals did
drive a lot of the discovery process. Nowadays, pretty much all discovery of
research papers happens on the web, and most of the discovery is via channels
such as search, and recommendations from colleagues over social &
communication platforms.

You are right to think about the incentive systems that drive different kinds
of peer review. In the case of the social review process, you want to share
good recommendations with your friends, not bad ones (this is why Twitter and
Facebook work as discovery channels). In the case of the crowd review process,
you want to cite good works that your work genuinely built on, not bad work.

~~~
flom
My understanding is that most publications are not heavily cited, and address
very specific problems. I'm curious as to how you plan on creating incentives
for qualified people to review the papers in their field which focus on
problems outside of their present research interests.

To give a personal example: I used to work in a research group that applied
techniques of a field called geometric control theory to the problem of
modeling flocks of birds. There are not that many geometric control theorists
in the world, and their interests range from robotics, to circuits, to
modeling schools of fish. Right now, there are journals of control theory
that force their reviewers to review all submissions, regardless of their
personal interests.

How do you plan on emulating that? In other words, there need to be incentives
for geometric control theorists who work on modeling schools of fish to spend
a few days of their time grinding through the proofs of some grad student's
paper in modeling bird flocks, which he needs for his thesis.

~~~
RichardPrice
Peer review often works the worst when a journal asks someone who's not that
qualified to peer review a paper. See the comment below from billswift about
an unbelievable case of peer review going wrong, where the reviewers weren't
in the research area of the target article (you need to follow the links to
get the full picture).

Something doesn't have to be heavily cited to start getting traction, just as
a website or blog post doesn't have to be heavily linked to get some. A friend
might find the paper and share it via Twitter, their blog, or Academia.edu;
this is how the social review process works on Twitter and Facebook. You might
wonder: 'won't it take ages for a paper to get traction like that?' But in
reality, on social platforms like Facebook and Twitter, content can be
surfaced, and whip around the world, at incredible speed. The journal peer
review process, by contrast, is several orders of magnitude slower.

The web is known for having a long tail of content: lots of content that
appeals to certain niches of people. It is part of the magic of the web's
discovery channels that this long tail content gets routed to the niches who
care about it - via search, Facebook etc. The fact that academics
overwhelmingly use the web now for research discovery is testament to how good
a job the web's discovery channels are doing at surfacing good content in
highly niche areas.

~~~
tensor
As a researcher myself, I do not use the general web as a means of content
discovery. I do use Google Scholar, PubMed, and CiteSeer, but these are only
useful because they restrict the search to peer-reviewed journals (and,
optionally, patents in Google Scholar's case).

Trying to use general web search is often far too noisy. I often find
interesting papers on page two, three, or four of Google Scholar. In a general
web search, these same papers might be on a much later page, or buried so far
down that they don't come up at all.

As others have said, there is also much more to peer review than discovery.
Peer review is additionally intended to help authors improve work that isn't
quite up to standards. Academics do peer review for free because they
recognize the value of it and because they are asked directly by the editors
to do it. In your model, what would be the incentive to look for new papers
and give reviews? How would you handle old versions of papers with mistakes or
papers that are of insufficient quality? The peer review process currently
filters these intermediate stages of paper writing. In your proposed model,
you would potentially have many versions of the same paper that a user would
then have to filter through, in addition to a great many papers of very low
quality.

The quality of content on the web, and its curation, is a _very_ poor standard
against which to compare academic literature. Many users are happy just to be
able to filter out obvious spam pages, let alone judge quality to the level
needed in academia.

~~~
RichardPrice
It's true that the papers in the indexes of Google Scholar, Pubmed, and
Citeseer are peer reviewed, but what that shows me is that what is really
driving the discovery process is the ranking system in those search engines:
i.e. the order in which the hundreds of results for a given search query show
up.

One of the drawbacks of the existing peer review process is that academics
don't get credit for their reviews, nor do the reviews see the light of day
for others to benefit from. I expect that there would be significantly more
discussion, and reviews, of papers in the future if there was a credit system
that allowed people to get credit for reviews and comments they made of
papers. I think that credit system is possible and will be built.

There is an interesting question regarding the immutability of content. Right
now, once you publish a paper, you can't edit it, or delete it. It's an
immutable piece of content. Before the web was established, there was a line
of thinking, developed by Ted Nelson, according to which the internet should
evolve the same way: a link should always work, and once some content is
posted, it can never be taken down. Most people are probably glad that the web
developed along the lines of Tim Berners-Lee's thinking, rather than Ted
Nelson's, and that they are now free to edit and delete content they have
posted. Similarly, I think people would appreciate being able to update a paper
in response to a comment they have received. The author is better off, and so
are subsequent readers, as they find themselves reading a more evolved and
advanced paper.

~~~
tensor
Reviewers are _intentionally_ hidden from view. The reason is to prevent
social or political backlash from a poor review. In fact, paper reviews are
sometimes _double blind_: the reviewers do not know the names of the authors
either. This is also to prevent bias on the part of the reviewers. This
secrecy is a positive and important part of the peer review system. Science
should not be politically or socially biased.

The immutability of published works is also crucial. It is routine for writers
to leave out details covered in prior works. This saves immense amounts of
time for both writers and consumers of scientific works. However, it also
means that it is crucial that all cited works be preserved forever. If a
document truly goes missing, then entire lines of work become incomplete.
Papers can be, and occasionally are, withdrawn, but as far as I know the work
is not erased, merely marked as bad.

The way updates or corrections are made is via newer papers revisiting topics.
But the point remains that at every step some due diligence is done to correct
errors and to avoid cluttering the record with incomplete versions.

As great as the web is for unstructured content, you cannot easily apply it to
every area, and especially not to scientific publications. There are plenty of
other examples of curated sources on the web that are crucial: map systems,
curated databases of restaurants, directories of people like LinkedIn and
Facebook; even Wikipedia can be counted as a curated system because of its
system of editors.

Scientists have always been, and still are, free to share data, white papers,
and whatever else outside the peer review system. The main reason peer review
is still here is that no suitable alternative has ever been proposed that
addresses all the points that peer review does. There _is_ a push towards open
access journals to benefit the world at large, though.

I understand that you are passionate about this, but I'm not convinced by your
arguments.

------
tylerneylon
Summary: Current peer review is not 100% reliable; we need to think in terms
of authors' motivations (tenure); try a Hacker News model, possibly layered
with badges-of-approval from an editorial board.

The quality of peer review is only as good as the editorial board and
reviewers make it -- often this is a lower standard than we expect. This
article lists several extreme examples of poor practices by a
(previously-)prominent publisher:

<http://blog.mathunion.org/journals/?no_cache=1&tx_t3blog_pi1%5BblogList%5D%5BshowUid%5D=30&tx_t3blog_pi1%5BblogList%5D%5Byear%5D=2012&tx_t3blog_pi1%5BblogList%5D%5Bmonth%5D=02&tx_t3blog_pi1%5BblogList%5D%5Bday%5D=05&cHash=a2d6424f899302a7ea3b75b9bb591802>

The key to shifting academic quality control away from the present archaic
model is the authors' motivation. The replacement needs to be a system
that improves and filters incoming papers, and at the same time earns at least
as much prestige for the authors who participate.

One idea is something like a combination of Hacker News and Stack Overflow,
with academic articles in place of news articles / technical questions as the
object of discussion. Papers can be changed in response to comments. High
scores for either a paper or an author on such a site could replace the
prestige granted by acceptance into a top-tier journal. Points are awarded for
reviews, and researchers are respected for their scores (closer to tenure), so
there is more positive motivation to review than in today's system (which is:
it's rude not to review when asked).

I think that alone could work. Some academics might balk at a lack of
editorial boards. Fine, let's allow some editors in the mix. An editorial
board is an aggregator. They choose what content fits their filter, and may
open a discussion with an author on improvements, possibly bringing in
reviewers. In the end, readers will see a prestigious badge of the board's
approval on high-quality articles, and there could be a page or rss feed to
browse the articles meeting this standard. This can all exist on top of the
hacker news / stackoverflow system, which itself could be built on top of an
arXiv-like system.
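
A minimal sketch of what that data model might look like, in Python (all
names, point values, and the badge mechanism here are illustrative
assumptions, not a spec):

```python
# A minimal sketch of the proposed platform's data model. All names,
# point values, and the badge mechanism are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Researcher:
    name: str
    reputation: int = 0          # the tenure-adjacent prestige signal

@dataclass
class Paper:
    title: str
    score: int = 0               # community votes, HN-style
    badges: set = field(default_factory=set)  # editorial-board approvals

def review(reviewer: Researcher, paper: Paper, points: int = 10) -> None:
    """Award points for reviewing: a positive motivation today's system lacks."""
    reviewer.reputation += points

def endorse(board: str, paper: Paper) -> None:
    """An editorial board stamps a paper that passes its filter."""
    paper.badges.add(board)

alice = Researcher("alice")
paper = Paper("Flocking via geometric control")
review(alice, paper)
endorse("Geometric Control Board", paper)
print(alice.reputation, paper.badges)  # 10 {'Geometric Control Board'}
```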

Several of these ideas are motivated by thoughts from the blogs of Tim Gowers
and Michael Nielsen.

~~~
flom
I don't think the hacker news / stackoverflow model will work for research
papers. People read hacker news articles because they want to keep up with
what's going on in the tech community, even if it's not super related to their
particular job. Researchers generally don't just read research papers for fun.
Rather, they are working on their own research, and they are looking for
relevant papers whose results they can use. Hence, it's critical that when
they read a paper, they know they can trust the results.

I can skim an article posted on hacker news in 30 seconds and have a general
idea of what the content was. I can also skim a research paper, but only if I
know that the research was done well (e.g. all the proofs are correct, or the
assumptions in the model made sense, or all correct variables were properly
controlled in the experiment). If that stuff isn't guaranteed, the paper is
useless, because I don't know if I can use it for my research. And figuring
all that out myself would literally take a few days out of my own research.
Just because 10 people skimmed it, thought it sounded cool, and up-voted it
does not mean the results in the paper are correct.

Other than this review aspect, I really like this product. If they can figure
out a good incentivized review system, I think it will be a big hit.

~~~
tylerneylon
You have some good points.

Perhaps reviewers could publicly vouch for an article to give it credence.
Reviewers, who are also researchers, can have scores based on their own
papers. If a reviewer with a high score publicly vouches for an article, that
article would receive points in favor of being correct.
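
A minimal sketch of score-weighted vouching (the log damping is an
illustrative assumption, chosen so that one very prominent voucher can't
single-handedly certify a paper):

```python
import math

def credence(voucher_scores):
    """Credence an article earns from public vouches, where each vouch is
    weighted by the voucher's own paper score (log-damped)."""
    return sum(math.log1p(score) for score in voucher_scores)

print(credence([120, 95]))        # ~9.4: two vouches from strong reviewers
print(credence([3, 5, 2, 4, 1]))  # ~6.6: five vouches from weak ones
```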

The most interesting time in writing a paper is when it's under review before
being accepted. The process is extremely similar to debugging an app shortly
before a release. Part of my motivation for the proposed design is to upgrade
this to an ongoing process -- analogous to the move from rare-release desktop
software to dynamic web apps. If the author cares about their paper and
receives useful comments, then the paper can iterate on that feedback.

I agree that many researchers in practice hear about papers they want to read
through word of mouth. For this use case, I like to compare an online platform
to stackoverflow (SO), in that you most often visit stackoverflow to solve a
focused problem you have, as opposed to browsing for fun (like hacker news).
An external problem brings you to a page (a math paper / an SO question); you
check it out, and you learn what you wanted to learn, or you contribute your
knowledge or ask a question. It's a mini-community around a particular idea (a
paper or a question). I see this as the main use case. The analogy with hacker
news was to imply that the platform could be built around links to arXiv
papers in the way HN is built around external news stories (while SO is more
self-contained). Like HN, it doesn't seem necessary for an author to submit
their own paper to the site.

Finally, I would say that some researchers do browse for fun. I enjoy reading
several mathematicians' and computer scientists' blogs, and occasionally visit
mathoverflow. This would be a secondary use of the platform, but it would be
fun to have a front page of popular papers, and per-topic subpages.

------
billswift
For an example of lousy peer review, I posted this yesterday on LessWrong:

A new science journal recently published a seriously crackpot paper; this has
the abstract and a link to the PDF. I first heard about it from Derek Lowe, who
has also written two follow-up posts. The first has a couple of links
discussing how news of the paper spread, while the second includes a link to
the journal making excuses for why they published it.

_Moreover, members of the Editorial Board have objected to these papers; some
have resigned, and others have questioned the scientific validity of the
contributions. In response I want to first state some basic facts regarding
all publications in this journal. All papers are peer-reviewed, although it is
often difficult to obtain expert reviewers for some of the interdisciplinary
topics covered by this journal. I feel obliged to stress that although we will
strive to guarantee the scientific standard of the papers published in this
journal, all the responsibility for the ideas contained in the published
articles rests entirely on their authors._

I included the links to all of Derek Lowe's posts because they have other
interesting links, including in the comments.

The permalink is here:
<http://lesswrong.com/lw/9p9/open_thread_february_114_2012/5trq>, so you can
check out the links I included.

~~~
RichardPrice
Fascinating example of peer review not working out. It reminds me of the Sokal
hoax, except that this one doesn't look like a hoax.

------
wybo
Also see the now-finished LiquidPub project by the University of Trento,
Springer, and others, for more ideas and views on peer review in the digital
age:

<http://liquidpub.org/>

This paper about journals:

<http://wiki.liquidpub.org/mediawiki/upload/9/9b/Liquid-journal-proposal_v0.13.pdf> (Liquid Journals)

And this paper about peer review:

<http://ubiquity.acm.org/article.cfm?id=1226695> (Publish and Perish)

Though in a way the differences between people's descriptions and views are
mostly details, as what is clear is that peer review will change, and that it
will move away from publisher control.

Imho the biggest difficulty will be the social aspect: breaking the feedback
loop between journal publications and perceived academic credit. Until
individual scholars can escape the social trap of journal publishing and
handing over their copyrights without harming their careers, they will be
unlikely to do so.

(<http://en.wikipedia.org/wiki/Social_trap>, and a great article on social
traps: <http://www.agls.uidaho.edu/agec467/agec467/Other%20readings/Platt.PDF>)

~~~
RichardPrice
Wybo - thanks for these links. I totally agree that breaking the feedback loop
you mentioned, that binds academics to the current journal system, is key. I
think the central things that need to happen are: building an alternative
discovery system (already happening with the web), and building an alternative
credit system (happening with citation counts, and other metrics, but is
lagging behind the growth in alternative discovery systems, and needs to be
accelerated).

~~~
femto
I wonder if distributed algorithms, as used in web-based currencies (e.g.
Bitcoin) or incentive-based systems (e.g. BitTorrent), could be adapted to the
peer review system. The currency being traded would be academic credit.
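
To make the idea slightly more concrete, here is a minimal sketch of a
hash-chained ledger of academic-credit transfers in Python. Everything in it
is an illustrative assumption; a real Bitcoin-style design would also need
cryptographic signatures and a consensus rule.

```python
# A hash-chained, append-only ledger of academic-credit transfers.
# Illustrative only: no signatures, no consensus, no networking.

import hashlib
import json

class CreditLedger:
    def __init__(self):
        self.chain = []

    def transfer(self, sender, recipient, credit, reason):
        prev = self.chain[-1]["hash"] if self.chain else "genesis"
        entry = {"from": sender, "to": recipient, "credit": credit,
                 "reason": reason, "prev": prev}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.chain.append(entry)

ledger = CreditLedger()
ledger.transfer("journal-of-flocking", "reviewer-42", 5, "review of paper 17")
print(ledger.chain[-1]["hash"][:12])  # tamper-evident: a verifier recomputing
                                      # hashes detects any rewritten entry
```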

------
vladoh
I think it is not possible to achieve the quality of peer review by "crowd
review" or "social review". While most people are able to rate a funny picture
or an interesting article on the Internet, reviewing an academic paper
requires much more expertise in the field and knowledge of the related work.
Therefore I don't believe that a truly objective rating can be created. Also,
doing a peer review is time-consuming, because you have to really understand
the paper in detail and think about whether the contributions are big enough
and well explained, whether the experiments are done properly, and whether the
comparisons are fair. This is not an easy thing to achieve by just throwing a
lot of people at it (the users of the Internet).

Furthermore, when a paper is published in a well recognized journal or
conference one has a guarantee of the quality of the paper. For example when I
see something published at CVPR (Conference on Computer Vision and Pattern
Recognition) I know it is certainly worth reading. I don't want to imagine how
much more difficult it will be to find the valuable papers among all the bad
stuff that is being submitted for review and rejected.

I don't want to say that the peer review process is perfect, but if I have to
choose between "peer review" and "crowd review" I choose the former.

~~~
RichardPrice
Academics have always based reading decisions on recommendations from
colleagues. This trend has accelerated since email and the web were invented,
as your colleagues can communicate with you through more channels, and more
recommendations go back and forth. Social review always has been a big part of
research discovery, and it's become more important with the advent of the web.

Search engines play a huge role in driving research discovery, and their
ranking systems are fundamentally based on crowd review - calculations using
links, as in the case of Google and Bing, or citations, as in the case of
Google Scholar.
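
To make "calculations using links" concrete, here is a minimal power-iteration
PageRank over a toy citation graph (the graph, damping factor, and iteration
count are illustrative; this is the textbook algorithm, not any search
engine's production system):

```python
# Minimal power-iteration PageRank over a toy citation graph.
# Assumes every paper cites at least one other (no dangling nodes).

def pagerank(cites, damping=0.85, iters=50):
    """cites: dict mapping each paper to the list of papers it cites."""
    papers = list(cites)
    rank = {p: 1.0 / len(papers) for p in papers}
    for _ in range(iters):
        new = {p: (1 - damping) / len(papers) for p in papers}
        for p, targets in cites.items():
            for t in targets:
                new[t] += damping * rank[p] / len(targets)
        rank = new
    return rank

# Paper A is cited by everyone else, so it floats to the top:
graph = {"A": ["B"], "B": ["A"], "C": ["A", "B"], "D": ["A"]}
for paper, score in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
    print(paper, round(score, 3))
```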

One might wonder, in theory, whether a search engine could do a good job of
ranking academic papers, or whether recommendations from one's colleagues are
a viable way of discovering papers. But when you look, in practice, at how
academics operate day to day, those channels are the ones that academics
increasingly use to discover research.

------
knowtheory
It's also worth noting that Academia.edu is a for-profit enterprise. The
domain academia.edu was purchased prior to the strictures put in place to
ensure that only qualified educational institutions could register .edu
domains, and has been grandfathered in ever since. Academia.edu itself
apparently launched in 2008.

The author of this piece very much has a horse in this game, and as a result I
very much view his writing with a skeptical eye.

There are serious and substantial problems with social models of publishing,
especially when organizations/institutions start tying promotions and/or
remuneration to metrics like impact factor (which itself is a proprietary
formula).

We should remember to reward good science because it's good science, not just
because it's popular.

~~~
RichardPrice
I think you are getting the horse and the cart in the wrong order (to continue
the horse analogies :) ). I started Academia.edu, after finishing my PhD at
Oxford, because I believe that science is too closed and too slow, and that we
can build a faster, cheaper, and better system for disseminating research.
You're wondering whether I'm asserting the things in the post because I
started the company, but really it's the other way round: I started the
company because I believe the philosophy expressed in the post. One has to be
the change one wants to see in the world.

~~~
knowtheory
I'm sad to see that someone downvoted factual information provided about you
and the post you wrote.

And I'm not asserting anything about your motivations for why you started a
for-profit company. I agree with you that the publishing models we have
currently are broken. At the same time, I'm not entirely happy with some of
the changes that have taken place in the past 5-10 years.

When you look at the changes that digital distribution has wrought in
industries like the news media over a similar time span, the changes are not
all healthy ones, either from the perspective of publishers or of consumers.
The fact that HuffPo and Gawker have risen to the top of the pile as the
field's money-making enterprises is not such a great example to set.

Moreover, I'm unconvinced that adding more eyeballs to papers is going to
improve quality of review. If your assertion is that we can make scientific
review faster and cheaper without a subsequent loss of quality (let's just say
"more scalable"), then I'd love to hear your response to a claim i like to
make. I'd assert that publishing scientific papers is more similar to the
trials and travails of online media than it is to publishing source code.

The thing that makes open source software work is that software behavior
(broadly speaking) is verifiable. I can run your code and we can agree on the
results, or you can tell me i'm an idiot for not configuring the software
appropriately. I'd assert that FOSS scalability derives from this ability.
More eyeballs helps because everyone is working from/over the same artifact.

The same can't be said for most academic papers, because most academics don't
even publish their raw data, let alone have the facilities or the will to
publish verifiable algorithmic processes for arriving at their data.

I'd love to see a project which encouraged better replicability and
verification of results/test data. But you haven't suggested any of those
things.

P.S. absolutely, please do be the change you're looking for. But I'm not
convinced that you're the change _i'm_ looking for.

------
merraksh
_The drawbacks of the Two Person review process are that it is [...]
expensive: $8 billion a year is spent on subscriptions to journals, which is
money that could be spent on more research._

None of that money goes to the two reviewers. The reviewers could do the same
work for open access journals, and they would have (IMHO) no motivation not to
do it for free.

------
jfmercer
Am I the only one that finds it ironic that TechCrunch is writing about peer
review?

