
Peer Review - haltingproblem
https://rodneybrooks.com/peer-review/
======
higeorge13
I don't consider myself an expert in computer networks research, but I still
have 10 accepted (and ~20 rejected) papers with ~100 citations in journals,
conferences and workshops within my 3 years working in R&D. Hence my comments
below derive from my non-academic and more development-oriented background.

Peer review is completely broken nowadays for the following reasons:

-Reviewers might not have relevant expertise to judge whether a paper is good or not. I am not talking about top-notch conferences, mainly lower-tier ones and workshops. I have seen a lot of professors pushing paper reviews to their PhDs, or even students. I was also invited to review a paper as soon as I submitted my first one (and it was rejected btw)! In another workshop I was invited to review papers just because I was a colleague of one of the organisers.

-Old-school academia. This might not apply to all fields or all academia, but I have had good papers rejected because they didn't have a lot of math or simulations! My paper was examining an actual implementation of an SDN (software defined networking) protocol or a strategy with platforms and orchestrators which require weeks to set up and implement (OpenDaylight, OpenStack, OpenMANO, etc.), with actual experiments on real users' video activity, only for it to be rejected because I didn't provide any simulation. Jeez, novelty does not come only from theory; somebody has to implement and test these things on actual machines.

-Politics. I won't say much on this aspect, other than that a colleague of mine had 1000 citations within the same time just because there was a "mafia" (his words, not mine) of professors, lecturers, and PhDs reviewing and accepting all the group's papers, stuffing them with each other's references and co-inviting one another to each other's conferences and workshops.

~~~
jseliger
From the original piece: _I don’t have a solution, but I hope my observations
here might be interesting to some._

I have a partial solution: researchers "publish" papers to arXiv or similar,
then "submit" them to the journal, which conducts peer review. The "journal"
is a list of links to papers that it has accepted or verified.

That way, the paper is available to those who find it useful. If researchers
really think the peer reviewers are wrong, they can state why, and why they're
leaving it up despite the critiques. Peer-review reports can be kept
anonymous but can also be appended to the paper, so that readers can decide
for themselves whether the peer reviewers' comments are useful or accurate.
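
A minimal sketch of what that "journal as a list of links" could look like as
a data model, with every name here hypothetical (a sketch of the idea, not any
existing system):

    # Hypothetical overlay journal: it stores no papers, only links to
    # preprints plus the review reports and the author's response.
    from dataclasses import dataclass, field
    
    @dataclass
    class Entry:
        preprint_url: str          # e.g. a link to arXiv
        accepted: bool             # the journal's verdict after peer review
        reviews: list = field(default_factory=list)   # anonymous reports
        author_response: str = ""  # why the author is leaving it up
    
    @dataclass
    class OverlayJournal:
        name: str
        entries: list = field(default_factory=list)
    
        def listing(self):
            # The "journal" is just the list of links it has accepted.
            return [e.preprint_url for e in self.entries if e.accepted]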

~~~
currymj
This is how many CS conferences already work (even with public anonymous peer
review; go to openreview.net to see it in action). It does seem to be better,
although the frustration level for researchers is not really any lower.

Edit: also, in CS there is a norm that reviewers will not go out of their way
to figure out if a submitted paper has been previously posted on arXiv. I
don't think it's possible to have perfect anonymity without forbidding
researchers from publicly talking about work in progress, so this seems like a
reasonable compromise.

------
haltingproblem
Rodney Brooks's subsumption architecture revolutionized robotics. Back in the
90s you could replicate his work with Attila and Genghis and recreate hexapod
robots for under $1000, which was a pittance for a robot.

He was at MIT when he submitted those papers. The churlishness of those
reviews is staggering. What if he had given up and gone on to something else?

~~~
p1esk
_What if he had given up and gone on to something else?_

Maybe he would have accomplished even more?

~~~
haltingproblem
What would have happened if he had moved on to another field? Subsumption
architecture would eventually have been discovered by someone else, standing
on the shoulders of giants, the eventuality of all discoveries, and so on.

Or it might have taken another 50-100 years. Fundamentally unknowable.

------
simonkafan
Peer review is broken. More than once I've learned that papers get through
peer review more easily if you cite a reviewer's paper (of course you don't
know in advance who will review it, so it's basically luck). I often got
comments about missing related work "from relevant authors".

~~~
_emacsomancer_
I'm not sure that's exactly the reason it's broken. Usually an editor will
give a paper to people who are specialists on your topic, and they will thus
likely have written papers related to the topic as well.

One major issue is that, because of this, they will sometimes have a vested
interest (conscious or not) in blocking the publication of your paper if it
shows flaws in their own.

~~~
jhrmnn
Many journals I'm familiar with allow you to suggest referees, as well as
suggest which researchers should be avoided as referees. If I'm publishing
something that is competing in some sense with the work of other people, I
will usually suggest to the editor to leave them out from the peer review
process.

~~~
_emacsomancer_
This has only occasionally been available in my experience, and it introduces
perhaps worse problems of its own. If I'm explicitly arguing against someone's
theory, they are likely to be a relevant judge of the merits/demerits of the
arguments.

Having a good editor who can discern the difference between a reviewer's
relevant responses and 'turf warfare' is probably the best one can wish for.

------
INGELRII
There are two types of peer review.

1\. _Pre-publication peer review._ Reviewers examine the work before it is
presented to everyone else.

2\. _Post-publication peer review._ Papers are posted on an online forum where
others can read, comment on, and cite them. Papers can be accepted into
journals and go through peer review later.

Post-publication peer review does not mean that authors can't use pre-
publication peer review. Scientists can (and should be encouraged to) ask
their colleagues to review papers before posting them online, to spot errors.

Some fields have a long tradition of publishing and circulating working
papers, manuscripts, drafts, posters, etc. years before the final paper is
submitted for publication and pre-publication peer review. That works as well.

------
ChrisMarshallNY
I think peer review is quite important.

Note the word “peer,” though.

I would not peer review papers on robotics or AI, as those are not my forte. I
could review papers on some generalizations of software engineering or on the
Swift programming language.

Editing is a different matter. My mother was a scientific editor, and
absolutely _brutal_. She edited a book I wrote on writing software for a
certain specialty of Web site, and it was quite humbling. She knew nothing at
all about the technology, yet provided an immensely valuable service.

The book is long past its “sell-by” date, yet stands as maybe the only
“perfect” artifact of my life, thanks to her.

Then there are comment threads in venues like this one, which some might
equate to “peer review.”

There was an old Dilbert comic on peer review. I don’t think it can be found
online, but it was in _“Build a Better Life by Stealing Office Supplies.”_

It was entitled “The Joy of Feedback,” and probably applies to comment
threads.

------
papeda
Peer review is flawed, but I don't think it's _broken_. For context, I'm a
late-year PhD student in machine learning theory, and I submit almost entirely
to conferences, which is the norm in my field.

To me, the big problem is that there is little incentive for experts to review
papers. I write a few papers every year, and I review 5x that number. Slightly
more senior people -- postdocs and junior faculty -- may review more like
30-40 papers. Most papers I review are not good papers. This is especially
true at the big machine learning conferences like NeurIPS, ICML, and ICLR,
where a disconcertingly large fraction (1/3?) of submissions are outright bad.
Most of them are at best "meh". So there's not much benefit to the reviewer
beyond some notion of community service.

I think my ~colleagues (PhD students, postdocs, young faculty) have similar
opinions. We've certainly discussed these issues in the past. One positive
step NeurIPS has taken is giving "best reviewer" awards that come with a
free registration. That's a nice benefit, even if it won't on its own
incentivize good reviews. Some people have suggested outright paying people
for reviews at some quasi-competitive rate, maybe $100 per review (note that
this is not great -- a good review takes at least several hours if the paper
is not terrible). NeurIPS also added a requirement this year that all
submitting authors must review, which is an interesting experiment.

My impractical wish is that we also have some kind of "worst reviewer" sticker
for all the reviewers who just paraphrase the abstract, add some milquetoast
feelings on novelty, and weak reject. Less facetiously, some kind of
Stack Overflow reputation system might be useful. As it is, there's so little
external signal about being a good or bad reviewer that the incentives are
pretty wrong. Some kind of points system might help.
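
The mechanics of such a points system would be trivially simple; the hard part
is social, not technical. A toy sketch, with entirely arbitrary rating
categories and point values:

    # Toy Stack Overflow-style reputation tally for reviewers. The
    # categories and point values here are made-up assumptions.
    REVIEW_SCORES = {"helpful": 10, "adequate": 2, "low_effort": -5}
    
    def reputation(review_ratings):
        """Sum the points a reviewer has earned across rated reviews."""
        return sum(REVIEW_SCORES[r] for r in review_ratings)
    
    print(reputation(["helpful", "helpful", "low_effort"]))  # -> 15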

But my overall point might be: reviewers by and large aren't malicious,
jealous people who are only rejecting your work because they want to protect
their own turf. They are more likely good faith actors who are trying to
balance time spent on a pretty thankless but societally useful job (reviewing)
with time spent on much more personally enjoyable and rewarding activities
(research, family, hobbies, etc). There aren't really obvious solutions to
tilt the balance toward better reviews that can still handle the volume of
papers.

~~~
JoeAltmaier
Dunno about paying. Remember Amazon Mechanical Turk? Folks just spammed it
with zero-effort submissions and 'closed' thousands of contracts with bots.
Instantly gamed, so some jerk could walk off with thousands, $1 at a time.

The shaming idea, that could work.

~~~
canjobear
Mechanical Turk is still useful, though. And there’s a big difference between
paying a bored rando $1 to identify pictures of frogs and asking an academic
who is highly invested in the relevant ideas to review a (hopefully
interesting) paper.

~~~
JoeAltmaier
But is it different in the nobody-will-scam-it way? Remember, we're talking
about starving academics. They don't have a great history of philanthropic
behavior.

------
buboard
That's peer review from 3+ decades ago. Things have changed substantially. PIs
don't have enough time to review all that stuff, a lot of it is delegated to
students, and reviews can be hasty, with reviewers who may latch on to a
specific detail and let the rest pass through. It's not even that they are not
trying hard enough; peer review is not doable in 2020. If you honestly want to
scrutinize and improve every detail of a manuscript, you're better off posting
it on Reddit or Twitter.

> clamor for double blind anonymous review

That's not even possible in most fields. You can easily tell which lab it
comes from (and you can even often guess who the reviewer is).

------
chrisco255
Any ideas for how the system might work differently? Could peer review learn
anything from the open source community?

~~~
Vinnl
Currently, peer review fulfils three distinct functions, to the detriment of
all of them:

1\. Acting as a filter for what research is worth reading.

2\. Providing input to the author on how to improve their work.

3\. Influencing the author's professional evaluation.

The first is the most blunt instrument: when something is not considered worth
reading, it doesn't get published at all. This is problematic in the cases
where the peer reviewers misjudge, or when an article is evaluated on more
than its academic merit, e.g. whether it's "groundbreaking" enough for a
particular journal.

The second is useful, but made less so due to the power imbalance - an author
cannot judge for themselves whether the feedback makes sense in the context of
their work. This is especially problematic when someone has to take their work
to multiple journals and/or gets conflicting feedback.

And that power imbalance is the result of the third point: it can be more
important to the author to get that stamp of approval by the reviewers than to
meaningfully contribute to science, because that's the only way they can stay
in academia.

Ideally we'd split them up, but that does require aligning the academic
community - which is a hard problem, especially considering there are strong
vested interests in the status quo.

(Disclosure: I do volunteer for a project
([https://plaudit.pub](https://plaudit.pub)) aimed at splitting evaluation
from publication, because I think it's an important issue.)

~~~
albertzeyer
Plaudit is nothing more than an upvote, right? I don't think this is really
helpful. We already have a very similar metric, citation count, which is
definitely a useful metric.

At first, I thought that Plaudit added some post-publication peer-review
platform, or discussion forum, for each paper, similar to OpenReview but for
any paper. I think this might actually be a great idea.

I think that
[https://fermatslibrary.com/librarian](https://fermatslibrary.com/librarian)
actually provides similar functionality, but it has not seen much adoption.

~~~
Vinnl
It's not a metric, it's an endorsement. Think of it like being published in a
journal: it's not about how many journals an article gets published in, but
which ones. Likewise, what's important here is _who_ has endorsed it. Instead
of a journal name acting as a proxy for "someone I trust thinks this work is
worth reading", you can directly see the name of that person.

(That said, a big challenge with citation counts is that they take a long time
to accumulate.)

As I mentioned, I think the three functions should be separated. Although
Plaudit by itself does not facilitate giving feedback, it does not prevent it
either. Ideally, if someone comes across e.g. an error in an article, they
provide that feedback to the author (e.g. through an email, or using
Hypothes.is - a project that I think you might like). There's no need to only
contribute to improving scientific literature through a formal journal-
assisted peer review process.

And who knows: if the author incorporates that feedback, the giver of the
feedback might decide to endorse that article using Plaudit :)

------
albertzeyer
This topic comes up again and again. (Here some references from a quick
search:
[https://news.ycombinator.com/item?id=10531374](https://news.ycombinator.com/item?id=10531374)
[https://news.ycombinator.com/item?id=18523847](https://news.ycombinator.com/item?id=18523847)
[https://news.ycombinator.com/item?id=8731271](https://news.ycombinator.com/item?id=8731271)
[https://news.ycombinator.com/item?id=18595074](https://news.ycombinator.com/item?id=18595074)
[https://news.ycombinator.com/item?id=22251079](https://news.ycombinator.com/item?id=22251079))

But there are not many good solutions.

Most seem to agree that we should move over to a more open system, like
OpenReview, and also that all publications should be open for everyone
afterwards, and not behind some paywall. But these are only two aspects of the
system, and this is less about the peer reviewing itself.

The community already has a trend of publishing more and more directly on
arXiv, in addition to submitting to a conference. If the conference peer
review filters a paper out for some unlucky reason, it's still open for
everyone to see. That might already be an improvement over what we had before,
but it's probably not the optimum yet. What is the optimum?

~~~
jseliger
There are good solutions, but existing publishers like Elsevier like the
current situation because it makes them rich
([https://www.universityofcalifornia.edu/press-room/uc-
termina...](https://www.universityofcalifornia.edu/press-room/uc-terminates-
subscriptions-worlds-largest-scientific-publisher-push-open-access-publicly)).
Any existing individual academic is strongly incentivized to work within the
rotten system, even though most know its pathologies and want to defect—but
one person defecting alone will screw up their own career while likely leaving
the system intact. The equilibrium is bad even though the solutions aren't
complicated.
[https://news.ycombinator.com/item?id=23291869](https://news.ycombinator.com/item?id=23291869)

------
specialist
I love this topic of peer review. It feels like arguing about methodologies,
software QA & testing, and learning organizations back in the 90s.

Peer review and the reproducibility crisis are the absolute bleeding edge of
the human experience. Science, governance, transparency, accountability,
progress, group decision making. All of it.

I encourage all the critics and thought leaders to also consider how to apply
these lessons elsewhere.

Everything said about peer review also applies to policy making, product
development, news reporting, and investigative journalism.

Peter Drucker, way back when, stated that management is a technology. Same
applies to peer review and reproducibility.

For whatever reason, this post prompted me to think "sociology of scientific
progress". I encourage critics to also factor in the collective human
psychology. It's covered by so many books, like the classics The Structure of
Scientific Revolutions and Diffusion of Innovations. (Sorry, none of the
more contemporary titles, a la Predictably Irrational, are jumping into my
head at the moment.)

------
haltingproblem
Brooks's initial papers like "Intelligence Without Representation" and
"Elephants Don't Play Chess", and robots like Genghis and Attila, changed the
robotics landscape. It was a leap akin to going from a mainframe to an Apple
II.

Brooks was extraordinarily well placed to lead this switch in robotics. He was
at MIT, a place with very talented engineers - mechanical, electrical, etc. -
who could work on the robots. MIT also had limitless funding and a culture of
disruptive research with long payoffs. This might not have been possible at
other places like CMU, where robotics has more industrial applications. I
remember watching the movies of Attila and Genghis over and over again,
downloaded over an FTP connection!

[https://www.youtube.com/watch?v=-6piNZBFdfY](https://www.youtube.com/watch?v=-6piNZBFdfY)

------
LockAndLol
How is it that peer review hasn't changed in 30 years? With the number of
people affected, has nobody had the time and/or influence to improve the
process?

There are no doubt lots of smart people concerned by this...

------
BurningFrog
Science did well before radical ideas could be voted down by peer review.

Would Einstein and Darwin have made it through peer review? I doubt it.

What current-day people like that are being held back by their "peers"? I fear
we'll never know.

~~~
pdonis
_> Would Einstein and Darwin have made it through peer review? I doubt it._

Einstein _did_ make it through peer review; his classic 1905 papers were
published in Annalen der Physik, the most prestigious physics journal in the
world at that time, after being refereed and accepted.

Darwin didn't technically go through "peer review" since he published his
results in a book, not a scientific journal. But when he did so, what we now
call scientific journals were extremely rare.

~~~
btrettel
From what I've read the vast majority of Einstein's work was not peer
reviewed.

[http://michaelnielsen.org/blog/three-myths-about-
scientific-...](http://michaelnielsen.org/blog/three-myths-about-scientific-
peer-review/)

> How many of Einstein’s 300 plus papers were peer reviewed? According to the
> physicist and historian of science Daniel Kennefick, it may well be that
> only a single paper of Einstein’s was ever subject to peer review.

------
guptaneil
Peer review is something I've been thinking about a lot lately as an outsider.
I've never gone through a formal peer review process or worked in academia,
but it's pretty obvious that the process has significant problems.

However, the problem that peer review is trying to solve (having the
scientific community confer legitimacy on a paper) is so important that we
can't give up on it. Sure, pre-print servers mean science can move faster, but
they also mean anything can be "science." If we normalize publishing papers as
preprints, there's very little stopping conspiracy theorists from publishing a
paper and having it look plausible.

The sudden rush of media coverage for COVID research has really exacerbated
the problem. Journalists should cover preprint studies responsibly, but
frankly most don't, and who can blame them when even the scientific community
is struggling to review the papers itself.

I volunteer with a group[0] that's trying to address this problem in a small
way by creating visual summaries/reviews of popular preprints for mainstream
readers. This is a side project, so the review process is just a shared Google
doc for each paper, but here's what I'd like to see (a rough sketch follows
the lists below):

As a reviewer:

  * View diffs between paper revisions
  * Comment on papers inline, like a pull request
  * +1/-1 a submission, or just leave questions without a decision
  * Gain "karma" for good reviews to build my reputation in the community (like Stack Overflow)

As a paper author:

  * Reply to comments to understand and explain context
  * Post revisions

As a reader:

  * See visual summaries of papers, with inlined reviews (like Research Explained)
  * Vote on both studies and reviews to surface the good ones
  * Browse profiles of authors and reviewers to understand their qualifications/contributions
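Here is the rough sketch mentioned above of how those pieces might fit
together, with all names and fields made up (illustrative only, not any
existing platform's API):

    # Hypothetical data model: revisions are diffable, comments are inline
    # like on a pull request, and votes are simple tallies.
    import difflib
    from dataclasses import dataclass, field
    
    @dataclass
    class InlineComment:
        reviewer: str
        line: int
        text: str
        replies: list = field(default_factory=list)  # author responses
    
    @dataclass
    class Paper:
        title: str
        revisions: list = field(default_factory=list)  # full text per revision
        comments: list = field(default_factory=list)
        votes: int = 0  # net +1/-1 from reviewers and readers
    
        def diff(self, old, new):
            # View diffs between paper revisions, like a pull request.
            return "\n".join(difflib.unified_diff(
                self.revisions[old].splitlines(),
                self.revisions[new].splitlines(), lineterm=""))
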

I do think [https://distill.pub](https://distill.pub) is on the right track.
Their papers are significantly more visual/readable than most, and they
require contributors to open a pull request against the distill repo. However,
their process isn't approachable for non-technical fields.

[0]: [https://www.researchexplained.org](https://www.researchexplained.org)

~~~
marcus_holmes
The problem also works in reverse. Probably the biggest example is Plate
Tectonics - this idea took decades longer to be accepted than it needed to
because the generation of expert geologists who were the gatekeepers for the
journals refused to consider it seriously, and the guy who came up with the
idea wasn't a geologist [0].

Scientists are just humans, after all. Subject to human flaws. And the
scientific method works "in the end", not immediately.

[0] -
[https://en.wikipedia.org/wiki/Continental_drift](https://en.wikipedia.org/wiki/Continental_drift)

