
A Cornell professor’s trick for getting 1700 peer reviewed publications - luu
https://andrewgelman.com/2018/11/04/cornell-prof-not-pizzagate-guy-one-quick-trick-getting-1700-peer-reviewed-publications-cv/
======
programmertote
Back in 2012, I worked with a guy in grad school. He was a grad student then
as well--just one year senior to me. He would recruit new incoming grads to
'collaborate' with him on his current projects or whatnot.

The way he operated usually went like this: he had a hypothesis (something
that was usually trivial, in my opinion). Then he recruited grad or undergrad
students to work with him on it. After that, he let his teammates do all
the development work (coding) and experimentation (recruiting subjects and
sitting with them in the lab to run the experiments). We needed to recruit
human subjects because our work was related to the usability of assistive
technologies. By the time the data was collected and analyzed, the guy would
show up with a vague draft of the paper, in which he had liberally cited his
own (and his grad school mentor's) old publications, without knowing a damn
thing about what his teammates did. Then we, as the teammates who ran the
experiments, would fill out the experiment and results sections. He would then
write up the conclusion and fill in the rest of the holes. After that, he
submitted the paper, taking first authorship (because he came up with the
hypothesis). That's how he was able to publish >25 papers during 5 years
of grad school (until he got his PhD).

Now that guy is an assistant professor--with ~50 publications--at a well-known
university in the US Midwest, at a very young age (I think he's just over 30
years old). If you look at his publications, he liberally cites himself and
his mentors.

Kudos to him, he knows how to play the game.

~~~
scabarott
I'm in awe of his skills and intelligence. Sounds like someone who knows how
to work smarter. I would have taken that as a learning opportunity.

*edit: Getting a few downvotes on this one. I was being mildly sarcastic, but on the other hand it takes some skill and smart thinking to pull this kind of thing off. Also, in my experience, bullshitting is a very valuable and necessary skill at work and in life in general. Arguably it gets you to a position where you can do work of actual substance and get recognition for it.

~~~
mr_overalls
Sounds like you're in awe of someone who has learned to play a game to his own
benefit, but to the detriment of the larger institution (i.e. science) of
which he is a part.

~~~
throwaway5752
This seems to be a common thread among "lifehack" types.

------
jillesvangurp
Peer review is broken. The assumption behind peer review is that it filters
out bad research in a fair way. The reality is that it is neither fair nor
particularly effective at preventing bad research from slipping through.

Part of the problem is that review quality is almost as inconsistent as
article quality.

From experience I can tell you that rejecting papers is a lot more work than
accepting them. It requires arguing things in such a way that the argument
survives editor scrutiny. This in turn requires actually reading the paper,
doing some background checks, etc. Of course, editors getting involved also
means work for them. So negative reviews tend to trigger a lot of work, and
it's much easier to just rubber-stamp the paper with a thumbs-up review.

And editors have the problem of needing to fill their journals with stuff.
Nobody reads these things cover to cover. So, there's a lot of filler content
that will never be cited that got rubber stamped by reviewers taking a five
minute glance at the text. As long as the article is not too blatantly bad,
nobody cares. A common practice is to invite accepted authors back as
reviewers. So, there's a notion of bad articles leading to equally bad
reviews.

It is not surprising that there are researchers that game the system by
publishing in friendly publications where the reviewers/editors are known
friendly. They all need to keep their numbers up.

A possible fix could be to review out in the open: publish reviews along with
the article, and allow scientists to challenge those as well. It's one thing
to say "this is probably fine" anonymously; it's quite another to say "please
publish this with a message stating that I approve and endorse the content of
this article."

~~~
snarf21
I am not an insider, but could the reviewer role turn into more of an
endorsement? Meaning the paper is published and the endorsers are listed as
well, e.g. "Reviewed and endorsed by Jill Jones of Cornell University."
Organizations could then have clauses stating that if you endorsed papers you
didn't actually review and test, you would lose your job. I know that seems
harsh, but we need to find a way to get "skin in the game". I don't think
having a paid organization would work, and I think having something at the
government level would be worse.

~~~
nicoburns
I like this idea, but I think it might make the whole system collapse, with
few people willing to review papers at all.

~~~
pacala
The ‘nobody wants to do job X’ problem has the usual solution: compensation.

~~~
bdamm
That compensation would come at the expense of research grants. This is not
the same world as commercial development.

~~~
dash2
Some journals do offer compensation - I think the American Economic Review
used to offer £100. The solution hasn't been widely adopted, though, perhaps
because academics enjoy the kudos of being a reviewer (and the chance to
savage other people's work anonymously :-P... ). And while you can pay for
reviews, it is hard to pay for review quality, which is very subjective.

~~~
solveit
Also because a price that would significantly incentivise a senior academic is
a price that is way too high for most journals to pay. If you have prestige, a
decent if not exorbitant salary, bulletproof job security, a project you're
passionate about and want to spend time on, and way too many obligations to
actually spend time on your project, how much would _you_ take in exchange for
adding yet another obligation to the pile, taking time away from what you
really want to spend your time doing?

------
lubujackson
20 years ago, Sergey and Larry used the scientific publishing idea of tracking
citations to determine document importance and built a better search engine.
When people realized that links mattered more than anything, they started
spamming links all over the place and the SEO battles began.

Glad to see that the scientists have caught up.

~~~
sytelus
In reality, PageRank is surprisingly resistant to such efforts. SEO typically
involves a bit of social engineering and hacking (for example, convincing a
well-known website to link to you, or spamming the comments of an unpatched
blog engine).

The h-index, the measure used by researchers, is on the other hand
surprisingly gameable. For example, two researchers can buddy up and cite each
other across a few hundred publications in junk journals with DOIs, and
suddenly become the most prominent scientists of all time. It is mind-boggling
that even Google Scholar doesn't use PageRank.
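To make the gaming concrete, here is a toy sketch (invented numbers, not anyone's real record) of how mutual citation inflates the h-index, using the standard definition: the largest h such that h of your papers have at least h citations each.

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
    return h

# An honest record: a handful of genuinely cited papers.
honest = [40, 22, 9, 4, 1]
print(h_index(honest))  # 4

# Two "buddies" each dump 30 junk papers into DOI-bearing junk journals
# and cite every one of the other's junk papers, so each junk paper
# collects 30 citations.
gamed = honest + [30] * 30
print(h_index(gamed))  # 30
```

Nothing in the metric distinguishes those manufactured citations from organic ones, which is the gameability described above; PageRank, by contrast, heavily discounts links from pages that nothing reputable links to.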

~~~
vanderZwan
If you're talking about PageRank anno 2018, then you're talking about two
decades' worth of research fighting back against gaming SEOs. I wouldn't say
the resulting resistance is all that surprising.

~~~
sytelus
My point was about gaming the score using self-citations. This is almost
trivial with the h-index and much more difficult with PageRank. PageRank is
superior to the h-index in its resistance.

------
conjectures
I'm getting increasingly cynical on this topic, given the prevalence of
introductions in popular media like "Professor X is a world-renowned expert
with O(1000) publications", which my brain translates as "Professor X probably
barely read everything with their name on it."

There are precious few exceptions to this rule, unless they are very old or
are Paul Erdős.

I'd much rather hear 'Professor X's most important contribution was Y.'

~~~
YeGoblynQueenne
I don't know about that. There's no doubt it happens that people artificially
inflate their citations, but how often? The fact that cases like this are
reported as something exceptional suggests that they are, most likely,
exceptional, or at least surprising to the people who point them out (which
again means that they are maybe not that usual).

In any case, if this kind of thing is reported in the popular press, blogs,
etc, it's because a scholar caught it and reported it, which means there are
checks and balances that don't let this behaviour run wild all the time.

And then again, there are going to be differences between fields--some will
suffer from this kind of problem more than others. There's no reason to "tar
everyone with the same brush".

~~~
conjectures
I didn't mean to imply outright plagiarism everywhere all the time. I think
there are many ways to get inflated numbers.

If a PI has 3 collaborator PIs and each PI has 4 students writing a paper
every three months then the PIs can get 48 publications a year. Over 20 years,
there's your O(1000) publications in a model with no substantial PI time
contribution.

Given teaching loads, home life, university admin, and academic duties like
reviews, I'd be surprised if a senior academic got 3 days per week to do
research. At that rate, even the model of 4 first-author-level publications
per year would be optimistic. So a more realistic cap on the quality work
someone could do in 20 years might be 80 publications.
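Spelling out the arithmetic behind those numbers (a sketch of the hypothetical model above, not real data):

```python
# Hypothetical model: a PI with 3 collaborator PIs, each collaborator
# running 4 students, each student producing a paper every three months
# -- with the PI's name on all of them.
collaborator_pis = 3
students_per_pi = 4
papers_per_student_per_year = 12 // 3  # one paper every three months

papers_per_year = collaborator_pis * students_per_pi * papers_per_student_per_year
print(papers_per_year)       # 48 publications a year
print(papers_per_year * 20)  # 960 over 20 years, i.e. O(1000)

# Versus the cap on genuine first-author work: ~4 papers/year for 20 years.
print(4 * 20)                # 80
```

The gap between 960 and 80 is the point: a four-figure publication count is evidence of a co-authorship pyramid, not of individual output.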

~~~
gervase
I've worked in a similar system to what you described, with the top-most PI
contributing essentially zero to publications directly. However, their
fundraising efforts were singlehandedly paying the stipends/salary of 40+
researchers under them. I felt like it was totally fair to include them in the
authors list, given that the research wouldn't have been possible without
their efforts.

I doubt the parent academic had O(1000) _first_ author publications, nor is
that mentioned in the article.

~~~
bjourne
No it isn't! That's like including your mom in an author's list because
without her raising you, you wouldn't be a researcher.

~~~
wolco
If your mom is directly raising funds to support 40 paid positions, she should
be included.

~~~
jononor
In the acknowledgements, not as an author. Especially not as primary author.

------
lisper
Holy cow! 1700 publications is one publication every week for 32 years! How
could anyone see that number and not realize that it must be a scam?

~~~
semajian
I've never seen a number that high in my field, maybe 500 or 600 for the big
shots, and even that seems wildly unreasonable to me. Feynman had about 85
peer-reviewed papers over his lifetime, which is less than two per year during
his career, which is a very reasonable number.

Authorship and citations are basically academic currency, and as such have
undergone inflation just like real currency. Professors get their name on a
paper as a form of payment for future or past services to other professors
(often related to funding). There's just no incentive for anyone to stop this
as far as I can tell, and appealing to ethical conduct in authorship and
citations is not enough.

~~~
nonbel
>"Authorship and citations are basically academic currency, and as such have
undergone inflation just like real currency."

Currencies don't just undergo inflation for no reason. Central banks create
that situation on purpose to discourage saving.

~~~
fjsolwmv
But the method they use is printing money, which moves value from those who
hold money to those who receive the newly printed money. It's the same with
academic publications, just done in a decentralized way.

(Another, more "natural" (Austrian) cause of inflation is a contracting
economy, so the same dollars chase fewer goods.)

~~~
nonbel
If publication of a new article is lowering the value of previous articles,
that indicates a severe dysfunction (which may indeed be the case).

If understanding/knowledge is being accumulated by a field the new
publications should make those that they cite even _more_ valuable, not less
like in the case of a fungible currency.

------
sn41
So, mix-and-match existing papers? As a referee, I often do check whether the
authors have published similar content before. The only way to beat this
simple vetting is if someone pushes the same paper to multiple venues, and
they all get accepted at around the same time. This is pretty unlikely.

Incentives for reviewing are broken. Currently, researchers have to see it as
an intrinsically motivated activity for the good of the field; there are no
quality checks per se.

~~~
dagw
More and more journals are using plagiarism detection software to test for
that. My wife recently got caught 'plagiarizing' herself after she copied a
paragraph of background material from a previous paper. The journal refused to
send her paper out for review until she rewrote the offending section.

~~~
blattimwind
Just self-cite?

------
potbelly83
I wonder in this day and age if a mind like John Nash would have thrived. Nash
had only a handful of publications, but all of them were masterpieces. Same
with Riemann.

~~~
fjsolwmv
They likely would get picked by super elite research orgs like IAS, or work
outside the system and get grants from people like Simons.

Or have a low-level teaching job, like that guy who proved a twin primes
result a few years ago.

------
mirimir
Now that's impressive! 1700 peer-reviewed papers. Damn.

A commenter asks why reviewers don't use plagiarism-detection software. With
most everything online (and Sci-Hub) it'd be pretty easy.

~~~
dagw
_A commenter asks why reviewers don't use plagiarism-detection software._

Many journals do these days, and won't send a paper for review unless it
passes.

------
onhn
Thankfully, arxiv.org flags articles with overlapping text:

[https://arxiv.org/help/overlap](https://arxiv.org/help/overlap)

~~~
improbable22
And the great thing about this system is that it does not inform you
beforehand. You cannot check whether your paper will pass, you submit, and
then when v1 appears online, it is marked as "having substantial text overlap
with works by other authors" etc, with links. You cannot remove this version.

It will also flag overlap with your own works, but obviously it's not exactly
a huge stain on your reputation if your thesis seems to borrow some paragraphs
from your own papers.

------
nine_k
As a software engineer, I spend comparable amounts of time writing my own code
and reviewing other people's code. I think that's normal.

Unlike scientists in academia, I'm explicitly (and well) paid to do both; it's
in my job description. If I do a sloppy job reviewing and let a bug through, I
feel the heat because something breaks. And what I work on is likely much
simpler than what scientists work on.

------
__bjoernd
I don't think this professor is exceptional, but rather the norm. As a PhD
student in computer science, I regularly observed people publishing papers
that overlapped 80% with their previous paper, the only difference being that
paper 1 was titled "Solving problem X using Y" whereas the next one would be
"A Y' approach to solving the X problem".

~~~
SubiculumCode
Overlapping in topic, that is natural. Exact text, exact framing. No.

------
SubiculumCode
I can tell you with certainty that this is not the norm in psychology
research. You will not publish empirical articles in such a manner in any
journal anyone cares about. Chapters are time-consuming affairs that are
usually the result of an invitation from a reputable figure in the field. They
will not accept a copy-paste of another chapter. Usually, the editor will have
requests to tailor your chapter to some theme. This Cornell professor is most
likely publishing in no-name journals and books that are pay-to-play... just
like the 1,001 spam messages in my inbox offer to me as an active researcher.
That Cornell has allowed this man to continue is beyond me, but it would not
be tolerated in my department. I admit I despise these articles. They take the
odd case and write a story, but leave the public thinking this is typical in
science. IT IS NOT.

------
kelvin0
There are rules and incentives in the 'Academia' game. Among important things:
get published, get cited, get your name out there. I am cynically inclined to
think the 'system' fosters this, and some people have gotten good at it.

Unfortunately it's become less about 'science and knowledge' and more about
getting the grants to survive and eventually be able to work on 'real'
research.

However I do not believe this is specific to Academia, any 'game' can be
hacked (at least for some time).

~~~
mathattack
Considering his background of Yale, Stanford and 13 honorary PhDs spanning 4
continents (his words) I wonder how far back his gamesmanship goes.

------
jessaustin
ISTM perhaps the link should be to the blog post [0] from which TFA liberally
quotes? At the very least, the references to different colors of text will
make more sense.

Perhaps there's something "ironic" about that...

[0] [http://steamtraen.blogspot.com/2018/04/some-instances-of-app...](http://steamtraen.blogspot.com/2018/04/some-instances-of-apparent-duplicate.html)

------
sytelus
TL;DR: a professor kept recycling his books into perhaps thousands of papers
with identical content. It is, however, not clear from the article how all
these got past peer review for decades.

~~~
coldtea
Peer-reviewed publications would just care whether the paper under review is
any good; they're not archivists who go and search for similarities. In fact,
in most cases they barely "read" the work in any substantial sense.

But the article suggests peers might not be seeing the works at all: "Bobbie
Spellman, former editor of the journal Perspectives on Psychological Science,
is confident “beyond a reasonable doubt” that Sternberg was not telling the
truth when he said that “all papers in Perspectives go out for peer review,
including his own introductions and discussions.” Unless, as Spellman puts it,
“you believe that ‘peer review’ means asking some folks to read it and then
deciding whether or not to take their advice before you approve publication of
it.”"

~~~
yiyus
> In fact in most cases they barely read the work with any substantial sense
> of "read".

I would not say this applies to most cases.

I have got many papers reviewed, have done many reviews, and have got
questions from people doing other reviews, and people usually put quite some
effort into it. There are exceptions, but bad reviews are not the norm, and
you can usually catch them when it happens and talk with the editor about it
(either if you are the author or another reviewer).

Searching for similarities should indeed be part of the job, but it is not
always easy, and I agree more work should be done in that aspect.

This may be different in other fields or specific journals but, please, don't
generalize.

------
headShrinker
It seems really pertinent to point out two other profs doing somewhat the same
thing. Peter Boghossian & James Lindsay recently did an interview with Joe
Rogan:
[https://www.youtube.com/watch?v=OlqU_JMTzd4](https://www.youtube.com/watch?v=OlqU_JMTzd4)

The interview is really interesting.

------
apo
Academics are highly incentivized to seek quantity over quality of
publications. Reviewers are disincentivized from performing thorough peer
review (unpaid, tangential relation of paper to area of interest, too many
papers to review).

This professor has simply optimized an unethical path through the broken
incentive system. He's not alone by a long shot.

------
erikb
And for each such professor that gets discovered, there are a dozen still
hidden and a hundred covering for each other, all doing the same stuff, just
slightly smarter.

I don't believe this stuff is a new phenomenon, though. It's more like there's
a group of people trying to discredit universities these days.

------
mathattack
From the professor’s website:

His main research interests are in intelligence, creativity, wisdom, thinking
styles, teaching and learning, love, _jealousy, envy, and hate._

------
starchild_3001
Hey, the dude published 1700 papers. Even if his repetition factor is 3 or 4,
that's 400+ unique publications. Let's show some respect ;) Also, in some
fields it's acceptable to publish the same material thrice: i) conference;
ii) journal; iii) book chapter, each a slightly evolved, expanded, condensed,
or improved version of the others (ahem, I'm looking at telecommunications
and signal processing).

------
xchip
TL;DR: The man is obsessed with citing his own work—except on the occasions
when he does a cut-and-paste job, in which case he is suddenly shy about
mentioning his other publications. And, as editor, he reportedly says he sends
out everything for peer review, but then doesn’t.

------
shanghaiaway
He's merely applying SEO practices. Don't hate the player, hate the game.

~~~
dev_tty01
I don't understand this. The professor is acting in an unethical manner. Do
you mean to say that if something is possible, it is intrinsically ethical?
Perhaps I am misinterpreting your comment. He is responsible for his own
actions and those actions are clearly outside the range of acceptable ethical
behavior for an honorable participant in a research community. Yes, the 'game'
also has issues, but that does not excuse the actions and choices of the
'player.'

------
linkmotif
How can I hire this guy??

------
ancorevard
"Social science" is an oxymoron. The problem with the social sciences is not
only the issue that the article points out, but that the vast majority of the
work is not reproducible.

~~~
ocfnash
My rule of thumb is that if a subject area includes the word "science" in its
title, it is less likely to follow the scientific method.

Physicists don't assert that they study "physics science", nor do chemists
describe their subject as "chemistry science".

It's only a rule of thumb of course. There are exceptions (computer science,
neuroscience, ...) but I think it works in a lot of cases: "social science",
"management science", "data science", ...

It's a bit like how if a country includes the word "democratic" in its title,
it's less likely to be a real democracy.

~~~
sideshowb
Computer science isn't an exception; it's mostly mathematics.

~~~
ocfnash
I tend to agree, I was thinking this as I wrote the above.

------
Kyragem
Google Scholar should filter out self-citations. Google Scholar hasn't been
updated in like 10 years; I sometimes worry Alphabet will pull its plug.

~~~
jmmcd
Yes, Google Scholar is so useful and ubiquitous that it is dangerous. It would
be far preferable to have a proper open replacement for it.

The technical issues are probably "easy". The ongoing maintenance
costs/hassles could be solved. But the human factors would be very tricky to
get right: anyone who maintained it would be under a lot of pressure in their
decision-making -- your example of whether self-citations should be filtered
is a good example of a decision which would have big effects and would be
highly political.

One good thing about Google Scholar is (was?) that we didn't worry that Google
engineers would tweak it to benefit Google researchers.

~~~
EthanHeilman
The biggest problem with replacing google scholar is that if you are not
google, publishers will threaten to sue you for crawling their website and/or
displaying information about their papers on your website. If you are Google
you can just counter-threaten to delist them and stop crawling their website
entirely. Google is unfortunately one of the only players that can actually
offer something like google scholar.

------
ArrayList
Academic journals hate him!

------
academiasucks
There are PhD students at my school who get tasked with peer review and they
hardly speak a lick of English. They don't otherwise seem that bright
either...

