
Elsevier investigates hundreds of peer reviewers for manipulating citations - DanBC
https://www.nature.com/articles/d41586-019-02639-9
======
xhkkffbf
This seems kind of bogus to me. Yeah, I can see how someone could construe it
as such, but I think there's a perfectly legitimate reason to mention possible
citations in the review. If I get a paper that covers something I've written
about in the past, I feel perfectly justified saying, "Gosh, this has been
published before. Perhaps you should cite this and explain how your work is
new."

I do it for other papers and I don't see why it would be wrong to do it for my
own. Oh, I can understand how someone could twist it to sound nefarious, but
that doesn't seem like the best conclusion to me.

More citations to prior work is a good thing. More discussion about what is
truly novel is a good thing.

And let's be honest, my salary won't change one iota if I get another few
citations. The idea that this is somehow selfish is really bogus. Citations
aren't worth that much at all.

~~~
diffeomorphism
> And let's be honest, my salary won't change one iota if I get another few
> citations. The idea that this is somehow selfish is really bogus. Citations
> aren't worth that much at all.

IF you already have a permanent position and many papers. While you are still
a young researcher with fewer publications, hiring committees are definitely
interested in whether your papers are highly cited (see also the whole notion
of the "impact factor" of a journal).

~~~
itcrowd
Yes, this is correct.

Additionally, massive cash incentives are being handed out in (at least) China
[1]. This reward is given based on the impact factor of the journal, which
means that excessive citation of your papers can artificially inflate the IF
of the journal and lead to higher payout for your next article(s). It's a bit
of a stretch, but it could have an impact if it is done on a massive scale.

[1] [https://www.technologyreview.com/s/608266/the-truth-about-
ch...](https://www.technologyreview.com/s/608266/the-truth-about-chinas-cash-
for-publication-policy/)
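For concreteness, here's a rough sketch of the standard two-year journal impact factor calculation the comment above refers to (the numbers are made up for illustration, not from any real journal):

```python
def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """Two-year impact factor for a given year Y:
    citations received in Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Toy numbers: 200 citable items over the prior two years,
# 500 citations to them this year.
print(impact_factor(500, 200))  # 2.5

# A reviewer coercing, say, 50 extra citations into accepted papers
# would push the same journal's IF from 2.5 to 2.75.
print(impact_factor(550, 200))  # 2.75
```

Because the denominator stays fixed, every coerced citation raises the ratio directly, which is why this can matter at scale even though each individual request looks small.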

------
einpoklum
I don't like Elsevier, but I always wondered:

Why aren't there a few meta-reviewers, to review the reviews?

That is, why doesn't the PC chair, or some key reviewers, have a look at the
reviews given on papers to make sure that:

\- They were respectful

\- They were to the point

\- They did not indicate obvious lack of knowledge of the relevant subject
matter (this one might be a bit tricky)

\- They were not self-promoting

That would really have helped a lot of conferences...

(but I actually know the answer: there's a dearth of reviewers as-is, so
there's little motivation to introduce this meta-review mechanism even if it
would help review quality.)

~~~
Majromax
> Why aren't there a few meta-reviewers, to review the reviews?

In principle, the associate editor managing the submission (and making the
ultimate accept/reject decision) should be doing this for every paper. In
practice, the workload is such that editors have an incentive to make the
review process as automatic as possible on their end.

~~~
einpoklum
So in theory, a highly-trusted reviewer should avoid regular reviews and be
engaged in meta-reviews, because of that load.

------
einpoklum
Unrelated to anything else:

Please try avoiding work with Elsevier journals, and instead prefer open-
access not-for-profit ones.

Did you know Elsevier had a 37% operating profit margin last year?

~~~
denton-scratch
Elsevier has an open-access no-cost publishing option [1].

I understand it's considered inferior to the paywalled version by reviewers,
and that Elsevier itself regards it rather like a loss-leader. So indirectly,
I suppose Elsevier profits from it. And I don't disagree with the exhortation
to avoid Elsevier - it's better for all of us if research (and especially
publicly-funded research) can be read by anyone, without having to pay a
gatekeeper.

[1] [https://www.elsevier.com/en-gb/about/open-science/open-
acces...](https://www.elsevier.com/en-gb/about/open-science/open-access)

------
itcrowd
Here's a possible solution to this problem. I'm still workshopping the idea,
so feedback is welcome.

The review process must become more transparent. The reviewers and their
assessment of the work must become public when the paper is accepted and
published. This in itself will prevent some nefarious citation-begging.

In addition, some guidelines could be mandated concerning citations to be
added or removed. For example, for each reference you want the author to
include, you must explain why it should be included. Not just "this has been
done before" but "the work you describe is similar to the work done on p.4 of
[new ref.] but differs in X,Y and Z and is still novel because of
invention/method A,B,C. Therefore, you build on their work and should cite
them."

To the others in this thread: remember that peer review is often an iterative
process and the authors and reviewers have multiple rounds of review in which
the authors can choose not to implement some suggested changes, such as adding
citations. This is often a small debate. The editor of the journal has the
final say and can overrule the reviewers' recommendation (publish / reject)
if there are good grounds to do so.

Also a small note: the reviewers are selected by the journal editors based on
expertise. That means, especially for smaller fields, that many prominent
papers are written by the reviewers simply because they are the experts. It is
easy to conclude that this leads to bad behavior, but it is often the case
that there simply are no other papers available with such specific details. Of
course, this is no excuse for the malicious actors, who also exist.

~~~
diffeomorphism
The reviewer is known to the editor, so you already have some accountability
there. The reviewer should definitely NOT be known publicly for the same
reason that reference letters are usually confidential. You are going to get
sugarcoated, bland, self-censored reviews otherwise.

> "the work you describe is similar to the work done on p.4 of [new ref.] but
> differs in X,Y and Z and is still novel because of invention/method A,B,C.
> Therefore, you build on their work and should cite them."

That is your job! As a referee I will point out that:

\- This is similar to paper X, which you did not cite. You should give credit
here.

\- You, not me, should argue why your work is new/better etc. and hence worth
publishing. If you just write "we do the exact same stuff as in X", then I
will assume this is not novel and not that you are lying and it is actually
novel and very different from X.

~~~
itcrowd
Points taken. Maybe the reviewers should be anonymous, but I still believe
that making the reviews public after publication is a good thing (anonymized).

> The reviewer is known to the editor, so you already have some accountability
> there.

Yes. I think it is not enough though.

> That is your job!

Well, yes it is. But, assuming the author is acting in good faith, it is still
possible to miss some references. You can kindly point out the references and
ask for clarification as to why the material isn't cited. I may have gone too
far by handing the arguments to the authors.

------
neilv
This seems a positive move for science process, as well as for Elsevier's
reputation and the value it can add as a publication venue.

~~~
semi-extrinsic
I think undoubtedly the reason for "investigating" this now is the optics for
Elsevier. They are bad guys in the eyes of most people right now.

The main evidence I have for this claim is that coercive citations are so
common as to be a trope in PhD Comics and the like. Moreover, it is bleeping
obvious to the editor, when they receive a review by M. Ister that suggests
10+ new references to M. Ister et al., that this is going on.

~~~
bonoboTP
It also de facto deanonymizes the reviewer.

------
ganitarashid
That may be the least of their problems right now

------
eyegor
It's good that they're investigating the "insider dealing" variant of citation
boosting, but I haven't seen anything around excessive self- or chain-citing, which
I always viewed as a more common offense. Obviously follow-up papers are a
good thing, but I've known several authors that aim for long chains of follow-
up papers (or summarizing several chains) purely for boosting citation
numbers. The idea being that if it's a hot topic their paper will be caught in
the crossfire regardless of whether it truly merits a citation.

------
jshowa3
Elsevier has always smelled bad to me.

They published the Andrew Wakefield autism paper for crying out loud.

~~~
denton-scratch
Well, The Lancet published it - The Lancet is probably the #1 medical
scholarly journal bar none. True - The Lancet is owned by Elsevier.

Ten years later, the General Medical Council found that Wakefield's research
had been dishonest, and on that basis The Lancet retracted the article. But I
don't think it's right to criticise Elsevier over the publication of a paper
that wasn't conclusively falsified for another decade.

Reviewers are not supposed to be detectives; it's not their job to expose
deliberate fraud. They rely on the material submitted by the author. This is
much the same as for auditors: they rely on the documents supplied by the
auditee's accounting team; they're not in the business of investigating
criminality and blatant lying.

Elsevier smells no worse to me than any other commercial publisher of
scholarly journals. To be clear, I think they all have a pretty dodgy whiff
about them; but I'm not aware that Elsevier is any worse than the others.

~~~
jshowa3
Most papers I've heard of that exhibit malfeasance come from Elsevier and it's
my understanding that the journal editors manage the whole peer review process
and are compensated by the publisher through some form of an agreement.

And yes, it is the job of reviewers to at least recommend not publishing a
paper that can be detected as fraudulent.

------
kayaeb
I'm actually surprised the number is so low.

With high profile scientists, in specific fields there are often 2 camps
pushing 2 different competing solutions to the same problem, and each solution
tree has a set of "canonical" references which are cited for most papers in
that tree. The opposing camp will generally request that a set of these be
discussed (i.e., why does your paper support your tree, and what have you
found that seemingly invalidates the other tree?). The more prolific authors in
a subfield are generally chosen as reviewers, and also hold the most recent
papers in that tree, so requesting citations to recent relevant literature
often goes hand in hand with work they've had input on.

Add to this the fact that, even in blind review, the reviewer can guess the
author's name correctly more than half the time, and anecdotally the reverse
is true as well. (This is not as astounding as it seems: in my specific
subfield there are maybe 20 people putting out a paper a year. There was a
paper I reviewed where I did not suggest my own work for citation, but the
author retroactively added a citation to me, leading me to believe they
guessed my identity.) Given all that, I'm surprised fewer than 1%
"consistently" are cited in papers they review.

If anything, this analysis is heartening, as I would expect the problem to be
much more prevalent under the "publish-or-perish" (get cited or lose your job)
paradigm. Elsevier, in my field, runs high-impact journals, though, and I know
the problem to be much more rampant in the lower-tier journals.

As a slight aside: some countries simultaneously push their researchers to
publish in local journals and then turn around and note that publications by
their scientists have tremendous citation numbers. I personally question the
validity of those citation counts, since they seem politically incentivized.
Within a field, real researchers have a feel for which journals are pushing
good science and which are "citation padding" with incomplete, hasty work
"published" purely for the numbers.

In response to the meta-review comments: all review is volunteer work, _it
should be this way,_ but I personally find it obnoxious when I'm on public
wifi and can't even load a paper I've reviewed because some ostentatiously
profitable publisher has paywalled it. The research is publicly funded, the
researcher pays $100 per page in publishing costs, and the review is volunteer
work. Why exactly do you need $30 for me to read the methods section?

~~~
evandijk70
I agree with most of your comment, but why do you believe review should be
volunteer work? In lots of non-academic fields you have professionals
evaluating the quality of products, and being paid to do so.

------
throwawayjava
"grifters complain about free labor"

------
nathanaldensr
Everyone is being told to put faith into "science" as somehow incorruptible,
only to have articles like this written. We are supposed to be able to rely on
the people writing papers being ethical, when it seems the reality is a race
to the bottom like everywhere else in society.

~~~
bonoboTP
I agree that the general public has basically no idea how the academic world
operates and surely their trust in it would decrease if they knew. There are
many sketchy practices and a lot is based on an "honor" system, which often
devolves to back scratching. Lots of inflated claims for sexy publicity, mass
produced PhDs etc.

People imagine science as this serious rigorous thing where everything is
objective and unbiased careful "monks" investigate all the details with
scrutiny. When actually it's often like a bazaar. You have to sway the
reviewer with overinflated claims and "sell yourself" in a way that fully
contradicts the scientific virtue of self-skepticism (finding reasons why
you're _wrong_ , not right). There's also a lot of who knows/likes who and
subjective human factors. These become obvious to anyone after a few months
or 1-2 years as a researcher, but science has a very different image for
laypeople who mostly know about it from school, mostly results that were
produced in a different era. The number of scientists has exploded recently,
there are tons and tons of papers written and they are all supposed to be some
novel discovery and this great leap forward, which is clearly unrealistic at
this scale. At the same time thorough analysis and skeptical papers are
"boo"-ed down, seen as confrontational and are undervalued compared to sexy
new claims.

Many institutions have become paper mills. Raw metrics like paper counts and
citation counts have replaced careful consideration in many evaluations; it's
basically an industrial-scale, mechanized process. It feels like a house of
cards that cannot be kept up indefinitely. The reputation of science will
surely be hurt as the public starts to find out more.

~~~
conjectures
On the other hand, to list some random newish things:

* Genome sequencing.

* New malaria drugs.

* Graphene.

* Better batteries.

* Deep learning.

* fMRI.

* CRISPR.

From the outside, the public doesn't care about the trials and tribulations of
PhDs. Or irrelevant puffed up papers. Or the organisational politics of
research.

~~~
bonoboTP
That's true, there is a lot of good work out there of course.

And I'd say the system is still of concern for the public. Few readers of pop-
science articles recognize when they are essentially reading a PR marketing
piece, and not something like a consensus, textbook-style description of
consolidated knowledge. The authors have special interests and biases beyond
uncovering the truth: pressure for publication counts, tenure prospects, CV
filling, all sorts of political considerations, flag planting, etc.

Another thing is how these discoveries are made. I think people imagine it as
if researchers research and ponder about things, then when they find
something, they write it up to share it with the world and contribute their
newfound knowledge. I know someone who did exactly that in his PhD; he
quit after several years without any paper submissions. When asked, he said he
hadn't found anything worth publishing yet. In most places, people decide in
advance that they're going to have a paper this year for this
journal/conference and then they plan a solid way there. You decide what
result would be sexy (keeping some secondary plans in place) and then do the
whole thing. Several times things will not go as you expect, but the paper
must be written. Then you rewrite history and come up with ridiculous
"storylines" to make it seem like all your hypotheses naturally flow from
prior work and everything fits in this nice story. Then you torture the data
until it sings because there must be a paper. You look for the smallest
indication of something interesting, inflate it to sound like a breakthrough,
speculate about far-reaching conclusions, etc. And then, if your university
has a good PR department, journalists will eat it all up and you'll be in the
media, impacting normal people's understanding as well.

(I realize I may be a bit too cynical here, and I exaggerate somewhat to get
the message through.)

~~~
conjectures
I get that, and you may not be wrong. It's that you may be right about the
wrong thing. All this stuff matters a lot if you're pursuing a career in
academia. But from the outside it just means the machine of science is a bit
less efficient than it could be.

Also I put it to you that serious researchers are rarely misled by low
quality, vacuous papers. That kind of work will be ignored by history.

So I agree that what you're pointing out is something of a problem, but may
not constitute a cataclysm.

------
buboard
... and then they'll be after networks of ~~bots~~ reviewers, and then
networks of networks of bots, and then state actors, etc. Scientific
publishing is quickly catching up to Twitter and black-hat SEO

