
Robot-written reviews fool academics - chavo-b
https://www.timeshighereducation.com/news/robot-written-reviews-fool-academics
======
GuiA
I'm not very surprised. If you've ever reviewed papers for an academic
conference, you'll find that the vast majority of them are just very bad. The
average ACM conference has an acceptance rate of around 20% - and the remaining
80% isn't just a hair away from being accepted. A big chunk of it is
just garbage.

Ill-defined research problems, vague statements, poor methodology, many
grammatical mistakes... given the nature of peer review, it's only natural
that people who author nonsensical papers would nod at nonsensical reviews.

For people saying that this is because academia is an old boys network: not
quite so. While it can definitely be like that when you get to the top, the
vast majority of peer reviewers for most conferences are just grad students,
post docs, or junior researchers who don't really discriminate by trying to
guess who wrote the paper.

~~~
santaclaus
Weird, I've had a very different experience reviewing for two big ACM
conferences. The rejected papers are usually too narrow a contribution,
but you'll usually see them appear in respected smaller venues
soon after. Most of the horrible unacceptable work gets culled out in earlier
passes before reviewers even see it.

~~~
more_original
ACM has a wide range of conferences. There are conferences with an A* ranking,
but also ones with a C ranking. Submissions to A*-conferences are not 80%
garbage.

[http://portal.core.edu.au/conf-ranks/?search=acm&by=all](http://portal.core.edu.au/conf-ranks/?search=acm&by=all)

~~~
GuiA
Your link shows that over 75% of the conferences are B or lower. So sure, the
submissions to A* conferences are not 80% garbage, but these conferences are
4% of the total. Not really representative of academic work as a whole.

~~~
eli_gottlieb
Oh hey, cool! My paper writing up my MSc thesis was at a B-grade conference
with a 26% acceptance rate. That's actually more selective than I expected,
given my opinion of my own work.

------
CobrastanJorji
If you generated papers with MIT's SCIgen, reviewed them with this, and then
responded to those reviews with short notes from Cleverbot, you could put
together a whole collection of "peer-reviewed", fee-charging journals. You'd
want to make sure they cross-cited each other heavily, thus earning the
journals and their "authors" prestige.

~~~
digi_owl
Now to set this up to discredit the big issues of the day...

------
more_original
The quality of reviews is a general problem with the reviewing system. Just
think about it: you're very busy with other work, you have many reviews to
write about papers that you're probably not really enthusiastic about, and
reviews are anonymous to all but the editors. It shouldn't be surprising that
this can result in rushed, bad reviews. It's also not too surprising that one
can automatically generate a 'review' that looks like someone forgot about
the reviewing deadline and wrote something in a rush.

Nevertheless, while bad reviews do make it through, I do think the editors are
able to recognise them for what they are.

------
jfroma
It reminds me of the "Postmodernism Generator"

[http://www.elsewhere.org/pomo/](http://www.elsewhere.org/pomo/)

~~~
osullivj
And the Sokal affair [1]

[1]
[https://en.wikipedia.org/wiki/Sokal_affair](https://en.wikipedia.org/wiki/Sokal_affair)

~~~
jknoepfler
And the reverse-Sokal affair [1]

[1]
[https://en.wikipedia.org/wiki/Bogdanov_affair](https://en.wikipedia.org/wiki/Bogdanov_affair)

------
SixSigma
"Markov chains still produce credible sentences after 30 years" reports Mark V
Shaney

[https://en.wikipedia.org/wiki/Mark_V._Shaney](https://en.wikipedia.org/wiki/Mark_V._Shaney)

~~~
thaumasiotes
But Markov chains have never produced credible sentences. They produce crazy
schizophrenic sentences that begin talking about one thing and end up talking
about something entirely different because they went on farther than their
lookback. Just look at the example in your link:

> It looks like Reagan is holding back the arms of the American eating public
> have changed dramatically, and it got pretty boring after about 300 games.

That's not going to get past anyone who reads it in any context. The method
here was to copy entire well-formed sentences from existing reviews.
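(For illustration, here's a minimal word-level Markov chain sketch - toy corpus and order-1 lookback are my assumptions - showing why the output drifts: each next word depends only on the last word, so the chain happily wanders between topics.)

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each `order`-word prefix to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain[prefix].append(words[i + order])
    return chain

def generate(chain, length=15, seed=None):
    """Walk the chain: each step conditions only on the last `order` words,
    so anything beyond that window is forgotten - hence the topic drift."""
    rng = random.Random(seed)
    prefix = rng.choice(list(chain))
    out = list(prefix)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(prefix):]))
        if not followers:
            break  # dead end: prefix was only seen at the end of the corpus
        out.append(rng.choice(followers))
    return " ".join(out)
```

With order=1, any word that appears in two different contexts ("the arms of..." / "the American eating public...") becomes a junction where the chain can switch topics mid-sentence.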

------
blazespin
I'm surprised this hasn't been done for math where papers actually can be
completely accurate.

------
coldcode
I would like to see the opposite too: identifying fake reviews written by robots.
Otherwise people will just start using fake-review robot sites so they don't
have to spend any time.

------
JoeAltmaier
Confirms the bias in the bogus peer-review bureaucracy: editors accept papers
from those they know, without reading the recommendations. It's an old-boys'
network?

~~~
bbctol
It seems much more likely that this is due to the rise of predatory publishers,
where editors are willfully lax on standards, as the goal is not to advance
science but to sucker scientists out of open-access publishing fees. Many
prestigious journals just blind editors to authors anyway.

------
gaius
This is how the ivory tower crumbles - as it becomes common knowledge that
scientists are as gullible as anyone, if not more so.

~~~
SubiculumCode
Reviewers make specific and general comments. The general comments are usually
less important but serve as general reminders: for example, to check for
typos, to better situate your findings in context, and to note the positive
aspects of having converging, objective measures. These generalities usually
cut across disciplines and take no real expertise. It is the specific reviewer
comments that make or break a paper. These fabricated reviews do not produce
specific, intelligent reviewer comments, just the general comments that sane
scientists do not resist or think much about.

