I do it for other papers and I don't see why it would be wrong to do it for my own. Oh, I can understand how someone could twist it to sound nefarious, but that doesn't seem the best solution to me.
More citations to prior work is a good thing. More discussion about what is truly novel is a good thing.
And let's be honest, my salary won't change one iota if I get another few citations. The idea that this is somehow selfish is really bogus. Citations aren't worth that much at all.
If referees and authors are semi-randomly matched, there's no reason a small number of referees should consistently get to review papers that have incorrectly omitted references to their own papers.
I'm not sure if this is true. Citations are the primary currency in academia. It's not about money; it's about prestige.
>More citations to prior work is a good thing. More discussion about what is truly novel is a good thing.
Again I'm not so sure about this. The problem is that most papers are not novel at all. There are some seminal papers in each subject that everyone cites. There are also papers that are closely related to what someone is doing. Everything else is usually a distraction.
That's true IF you already have a permanent position and many papers. While you are still a young researcher with fewer publications, hiring committees are definitely also interested in whether your papers are highly cited (see also the whole notion of the "impact factor" of a journal).
Additionally, massive cash incentives are being handed out in (at least) China. This reward is given based on the impact factor of the journal, which means that excessive citation of your papers can artificially inflate the IF of the journal and lead to a higher payout for your next article(s). It's a bit of a stretch, but it could have an impact if it is done on a massive scale.
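To see why coerced citations can move the needle here, it helps to remember how simple the impact-factor formula is. A minimal sketch of the standard two-year calculation (the citation and article counts are made-up illustrative numbers):

```python
# Sketch of the standard two-year journal impact factor (JIF).
# The counts used in the example call are invented for illustration.

def impact_factor(citations_to_prev_two_years: int,
                  items_published_prev_two_years: int) -> float:
    """JIF for year Y = citations received in Y to items published in
    Y-1 and Y-2, divided by citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / items_published_prev_two_years

# e.g. 1200 citations in 2020 to articles from 2018-2019,
# spread over 400 citable items published in 2018-2019:
print(impact_factor(1200, 400))  # -> 3.0
```

Because every citation to a recent article counts in the numerator, a handful of coerced self-citations per reviewed paper can noticeably inflate the IF of a small journal.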
Why aren't there a few meta-reviewers, to review the reviews?
That is, why doesn't the PC chair, or some key reviewers, have a look at the reviews given on papers to make sure that:
- They were respectful
- They were to the point
- They did not indicate obvious lack of knowledge of the relevant subject matter (this one might be a bit tricky)
- They were not self-promoting
That would really have helped a lot of conferences...
(but I actually know the answer: there's a dearth of reviewers as it is, so there's little motivation to introduce a meta-review mechanism even if it would help review quality.)
In principle, the associate editor managing the submission (and making the ultimate accept/reject decision) should be doing this for every paper. In practice, the workload is such that editors have an incentive to make the review process as automatic as possible on their end.
Please try to avoid working with Elsevier journals, and instead prefer open-access, not-for-profit ones.
Did you know Elsevier had a 37% operating profit margin last year?
I understand it's considered inferior to the paywalled version by reviewers, and that Elsevier itself regards it rather like a loss-leader. So indirectly, I suppose Elsevier profits from it. And I don't disagree with the exhortation to avoid Elsevier - it's better for all of us if research (and especially publicly-funded research) can be read by anyone, without having to pay a gatekeeper.
The review process must become more transparent. The reviewers and their assessment of the work must become public when the paper is accepted and published. This in itself will prevent some nefarious citation-begging.
In addition, some guidelines could be mandated concerning citations to be added or removed. For example, for each reference you want the author to include, you must explain why it should be included. Not just "this has been done before" but "the work you describe is similar to the work done on p.4 of [new ref.] but differs in X,Y and Z and is still novel because of invention/method A,B,C. Therefore, you build on their work and should cite them."
To the others in this thread: remember that peer review is often an iterative process, and the authors and reviewers have multiple rounds of review in which the authors can choose not to implement some suggested changes, such as adding citations. This is often a small debate. The editor of the journal has the final say and can overrule the decisions of the reviewers (publish / reject) if there are good grounds to do so.
Also a small note: the reviewers are selected by the journal editors based on expertise. That means, especially for smaller fields, that many prominent papers are written by the reviewers simply because they are the experts. It is easy to conclude that this leads to bad behavior but it is often the case that there simply are no other papers available with such specific details. Of course, this is not an excuse for malicious actors which are also rampant.
I disagree on one point specifically though. Reviewers must remain anonymous. What if I harshly review / reject a crap work from the department head of some group at Harvard? Then, in anger, I get blackballed by them and their friends, which could be the whole field in some cases?
> "the work you describe is similar to the work done on p.4 of [new ref.] but differs in X,Y and Z and is still novel because of invention/method A,B,C. Therefore, you build on their work and should cite them."
That is your job! As a referee I will point out that:
- This is similar to paper X, which you did not cite. You should give credit here.
- You, not me, should argue why your work is new/better etc. and hence worth publishing. If you just write "we do the exact same stuff as in X", I will assume the work is not novel, rather than assuming you are underselling something that is actually novel and very different from X.
> The reviewer is known to the editor, so you already have some accountability there.
Yes. I think it is not enough though.
> That is your job!
Well, yes it is. But, assuming the author is acting in good faith, it is still possible to miss some references. You can kindly point out the references and ask for clarification as to why the material isn't cited. I may have gone too far by handing the arguments to the authors.
The main evidence I have for this claim is that coercive citations are so common as to be a trope in PhD comics and the like. Moreover, it is bleeping obvious to the editor, when they receive a review from M. Ister that suggests 10+ new references to M. Ister et al., that this is going on.
They published the Andrew Wakefield autism paper for crying out loud.
Ten years later, the General Medical Council found that Wakefield's research had been dishonest, and on that basis The Lancet retracted the article. But I don't think it's right to criticise Elsevier over the publication of a paper that wasn't conclusively falsified for another decade.
Reviewers are not supposed to be detectives; it's not their job to expose deliberate fraud. They rely on the material submitted by the author. This is much the same as for auditors: they rely on the documents supplied by the auditee's accounting team; they're not in the business of investigating criminality and blatant lying.
Elsevier smells no worse to me than any other commercial publisher of scholarly journals. To be clear, I think they all have a pretty dodgy whiff about them; but I'm not aware that Elsevier is any worse than the others.
And yes, it is the job of reviewers to at least recommend not publishing a paper that can be detected as fraudulent.
With high profile scientists, in specific fields there are often 2 camps pushing 2 different competing solutions to the same problem, and each solution tree has a set of "canonical" references which are cited for most papers in that tree. A set of these is generally requested to be discussed by the opposing team (i.e. why does your paper support your tree and what have you found that seemingly invalidates the other tree?) The more prolific authors in a subfield are generally chosen as reviewers, and also hold the most recent papers in that tree, so requesting citations to recent relevant literature often goes hand in hand with work they've had input on.
Add to this the fact that, even in blind reviews, the reviewer can guess the author's identity correctly more than half the time, and the reverse is anecdotally true. (This is not as astounding as it seems: in my specific subfield there are maybe 20 people putting out a paper a year. There was a paper I reviewed where I did not suggest my own work for citation, but the author retroactively added a citation to me, leading me to believe they guessed my identity.) Given all that, I'm surprised fewer than 1% "consistently" are cited in papers they review.
If anything, this meta-analysis is heartening, as I would expect this problem to be much more prevalent in the "publish-or-perish" (get cited or lose your job) paradigm. Elsevier, in my field, has high-impact journals, though, and I know the problem to be much more rampant in the lower-tier journals.
As a slight aside, some countries simultaneously push local researchers to publish in their own local journals and then turn around and note that publications by their scientists have tremendous citation numbers. I personally question the validity of those citation counts, since they seem to be politically incentivized. Within a field, real researchers will have a feel for which journals are pushing good science and which are "citation padding" with incomplete and hasty work "published" just for the numbers.
In response to the meta-review comments: all review is volunteer work, and it should be this way, but I personally find it obnoxious when I'm on public wifi and can't even load a paper I've reviewed because some ostentatiously profitable publisher has paywalled it. The research is publicly funded, the researcher pays $100 per page in publishing costs, and the review is volunteer work. Why exactly do you need $30 for me to read the methods section?
I've had people play this game on several of my papers under review. Reviews will say something like "high quality submission but the authors should consider including work X for completeness" etc where it's obvious that work X was written by the reviewer. As an author, I just found some place to plausibly cite the work and moved on. The quality or validity of my manuscript was not impacted in any way. In my admittedly limited experience, reviewers will rarely green light obviously inferior work just to get an additional citation as it impacts their reputation and it's an inefficient way to stat pad (self-citation or trading citations with your collaborators is by far the most efficient way to pump citation count, especially in the context of review papers). Additionally, obviously inferior works don't even make it to the peer review process; they are rejected by the journal editors in a first pass screen.
There are more insidious games people play that DO impact publication quality, such as requesting friendly reviewers (this mechanism is normally legitimately used to help editors route manuscripts for review to domain experts), submitting work to journals where the PI is an editor / has relationships with editors, attaching well known PIs to manuscripts to add the appearance of legitimacy even though they did no work and were not involved at all, etc. These are problems that won't go away because there are far too many researchers and not enough money / positions to go around, so people are heavily incentivized to play games.
People imagine science as this serious, rigorous thing where everything is objective and unbiased, and careful "monks" investigate all the details with scrutiny. Actually, it's often more like a bazaar. You have to sway the reviewer with overinflated claims and "sell yourself" in a way that fully contradicts the scientific virtue of self-skepticism (finding reasons why you're wrong, not right). There's also a lot of who-knows/likes-who and subjective human factors. These are all obvious to people after a few months or 1-2 years as a researcher, but science has a very different image for laypeople, who mostly know about it from school, mostly from results that were produced in a different era. The number of scientists has exploded recently; there are tons and tons of papers written, and they are all supposed to be some novel discovery and great leap forward, which is clearly unrealistic at this scale. At the same time, thorough analyses and skeptical papers are booed down, seen as confrontational, and undervalued compared to sexy new claims.
Many institutions have become paper mills. Raw metrics like paper counts and citation counts have replaced careful consideration in many evaluations; it's basically an industrial-scale mechanized process. It feels like a house of cards that cannot be kept up indefinitely. The reputation of science will be hurt for sure as the public starts to find out more.
I share your conclusion but hope that science itself prevails, that this state we're in now is part of the "self-cleansing" property of science and that specific scientists will be held accountable instead of the concept of science. But I can only hope for now.
* Genome sequencing.
* New malaria drugs.
* Better batteries.
* Deep learning.
From the outside, the public doesn't care about the trials and tribulations of PhDs. Or irrelevant puffed up papers. Or the organisational politics of research.
And I'd say the system is still of concern for the public. Few readers of pop-science articles recognize when they are essentially reading a PR marketing piece and not something like a consensus, textbook-like description of consolidated knowledge. The authors have special interests and biases beyond uncovering the truth, such as pressure for publication counts, tenure possibilities, filling one's CV, all sorts of political considerations, flag planting, etc.
Another thing is how these discoveries are made. I think people imagine it as if researchers research and ponder about things, and then, when they find something, they write it up to share it with the world and contribute their newfound interesting knowledge. I know someone who did that in his PhD; he quit after several years without any paper submissions. When asked, he said he hadn't found anything yet worth publishing. In most places, people decide in advance that they're going to have a paper this year for this journal/conference and then they plan a solid way there. You decide what result would be sexy (keeping some secondary plans in place) and then do the whole thing. Several times things will not go as you expect, but the paper must be written. Then you rewrite history and come up with ridiculous "storylines" to make it seem like all your hypotheses naturally flow from prior work and everything fits in this nice story. Then you torture the data until it sings, because there must be a paper. You look for the smallest indication of something interesting, inflate it to sound like a breakthrough, speculate about far-reaching conclusions, etc. And then, if your university has a good PR department, journalists will eat it all up and you'll be out in the media, impacting normal people's understanding as well.
(I realize I may be a bit too cynical here, and I exaggerate somewhat to get the message through.)
Also I put it to you that serious researchers are rarely misled by low quality, vacuous papers. That kind of work will be ignored by history.
So I agree that what you're pointing out is something of a problem, but may not constitute a cataclysm.
A more open peer-review system would help this issue and expose abusers like this, but it's not clear (to me) how such a system could work... At least this issue does not directly affect the quality of research, but rather corrupt the already noisy quality metric of citation count.
It is a bit different from design docs/code review in the sense that there are N different authors competing to grab a place in a conference or journal -- so they may try to game the system, while in the case of design docs/code review there is no such competition.
Science doesn't work by noble individuals scrupulously following rules that they could ignore if they chose to.
Where science works, it works by incentivising one scientist to tear apart the next scientist's position. There's no better way to avoid having your argument torn apart than for it to be genuinely solid.
This is aptly demonstrated by Nature tearing into Elsevier...
The time to get really worried is when you stop hearing anything like this and get a message only of, 'Everything is fine, nothing to see here'.
This is very true. In my graduate education that came out loud and clear - your job is to try and punch holes in other people’s work. If you find a hole, it’s their job to fill it.
That’s what good science looks like.
Whatever you're proposing as the alternative to science has the same problem: it's implemented by imperfect humans.
What's your alternative?
You're taking that a tad too far. Remember, for long periods of time, certain disciplines were considered to be "science" which weren't quite that. It's not just individual personal failings.
How do you know turning lead into gold isn't possible? Science. How do you know ether and phlogiston aren't real? Science. How do you know it's not aberrant for women to have large clitorises? Well, "aberrant" is subjective, so that's really not even a scientific claim, so science certainly can't be blamed for that. How do you know it's not medically beneficial to cut off males' foreskins? Science.
You can't accuse science of screwing up those beliefs: science is the only reason you know those beliefs are wrong. These are examples of science working.
This subthread started with me saying "Science is incorruptible. Scientists are not. Don't conflate the imperfections of the humans implementing science as being problems with science." If anything, your examples prove my point. What you've shown is that scientists have believed some wildly inaccurate things in the past, and science has corrected them.
The point is that even science isn't well-defined and we need to be super flexible.
Huh? No it doesn't. This is complete nonsense.
The scientific method would deal with this by hypothesizing that A combined with B might do something, then testing the hypothesis, and discovering that it does. Generating that hypothesis might be difficult--nobody said science was easy. But there's nothing about the scientific method that can't handle the situation you propose.
There are hundreds of examples of this. Try, for example, hydrochloric acid and baking soda as your A and B--I've never mixed these two and don't know what they produce, but based on my knowledge of acids and baking soda, I hypothesize they will result in CO2, even though neither of these decomposes into that gas by itself.
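For what it's worth, that particular hypothesis checks out: mixing hydrochloric acid with baking soda (sodium bicarbonate) is a standard acid-carbonate reaction, balanced as:

```latex
\mathrm{NaHCO_3 + HCl \longrightarrow NaCl + H_2O + CO_2\uparrow}
```

Neither reactant releases CO2 on its own under normal conditions, yet the combination predictably does, which is exactly the kind of emergent-but-testable prediction the scientific method handles fine.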