
“Close to 10% of the papers we receive show some sign of academic misconduct” - tokenadult
http://retractionwatch.wordpress.com/2013/09/19/editor-close-to-10-of-the-papers-we-receive-show-some-sign-of-academic-misconduct/
======
aroch
On a vaguely related note, I know of a crystallographer who would send out
Christmas cards every year that looked like postcards with a partial, yet to
be published, crystal structure as the image. One year, one of the
crystallographers he sent the card out to figured out which protein it was,
reconstructed the density map (partially from his own data on the same
molecule, partially from the solved structure on the card) and published the
first PDB entry for it to the RCSB. The ensuing tiff and yelling about
academic misconduct was pretty funny. They wound up taking down the PDB entry,
but the guy's never sent out another postcard-like Christmas card.

~~~
craigyk
sounds like a good fairy tale. are you claiming he phased data he already had
using the model in the postcard? seems unlikely.

------
beloch
Self-plagiarism is a tricky thing. Some redundancy in publications is
virtually unavoidable. If you are reporting new results from an experiment
that has been previously published, you still need to provide a brief summary
of how things work, but, of course, you reference your previous work. This is
as transparent and innocent as things get, and isn't self-plagiarism.
However, what if you reproduce a figure published in another journal exactly?
Then things can get ugly (depending on a lot of legalese no researcher should
have to worry about)!

To make matters worse, the same experimental setup is frequently used to run
experiments that result in several publications. When you have several
papers, authored by different overlapping groups of researchers, in various
stages of review simultaneously, things can get very confusing. The first
paper you submit is frequently the last one published. The last paper you
submit will then wind up referencing arXiv pre-prints instead of published
articles, and if you did plagiarize yourself by mistake, the publisher of the
paper submitted last might have absolutely no chance of detecting it. On top
of it all, there is frequently interference from editors and reviewers. A
well-written and coherent paper on a theory and an experiment testing that
theory might be considered "too long" or "too confusing" by a referee, and
the editor will jump on that comment and demand the paper be split into two,
sometimes only one of which they feel like publishing in their journal. If
the researcher had done this of his/her own initiative, many would consider
it CV-padding!

Yes, self-plagiarism can be a means to pump up your publication list, but it
also happens innocently because publishing is such a confusing,
counter-intuitive mess! It's probably an idiotic idea, but some kind of
pan-journal version control system, even just for figures, would be a
tremendous headache reducer if done right!
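A minimal sketch of what such a figure registry might check: fingerprint each
submitted figure with a perceptual average-hash and compare new submissions
against the registry. All names, sizes, and the distance threshold here are
invented for illustration, not part of any real system:

```python
# Hypothetical figure-fingerprinting sketch for a "pan-journal version
# control" registry. Images are plain grayscale pixel grids (lists of rows)
# so the example stays dependency-free.

def average_hash(pixels, size=8):
    """Downsample a grayscale image to size x size cells, then set one bit
    per cell depending on whether the cell is brighter than the mean.
    Returns a size*size-bit integer fingerprint."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for r in range(size):
        for c in range(size):
            # average the rectangular block of pixels mapping onto this cell
            r0, r1 = r * h // size, max(r * h // size + 1, (r + 1) * h // size)
            c0, c1 = c * w // size, max(c * w // size + 1, (c + 1) * w // size)
            block = [pixels[i][j] for i in range(r0, r1) for j in range(c0, c1)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    bits = 0
    for v in cells:
        bits = (bits << 1) | (1 if v > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def is_probable_reuse(hash_a, hash_b, threshold=5):
    # A small Hamming distance suggests the same figure, possibly
    # re-rendered, re-compressed, or lightly retouched.
    return hamming(hash_a, hash_b) <= threshold
```

The point of a perceptual hash (rather than an exact checksum) is that a
figure re-exported at a different resolution or compression level still lands
within a few bits of its original fingerprint, so duplicate-figure checks
survive trivial re-rendering.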

------
unreal37
Half the plagiarism violations are "self-plagiarism", which I assume means
copying from your own previously published works. It hardly seems like an
ethical problem to copy from yourself, except, of course, if you're paid by
the word.

~~~
aidenn0
It increases the amount of crap out there, plus it pads the CV

~~~
ChuckMcM
This is a huge issue. When you slog through 5 papers someone has written and
keep getting a deja vu feeling it really sucks.

~~~
sliverstorm
Not to mention when your deja vu is of deja vu.

------
pizza_boy
I've said as much in the article comments, but it's worth repeating here:
there's really no surprise that this sort of thing happens when more than a
million articles are published every year.

There is so much pressure on researchers to publish. I really do think the way
to solve this sort of problem is to find ways to give researchers credit for
other forms of contributions.

This is what figshare is doing with datasets, and what we're doing with peer-
review: [http://blog.publons.com/post/61380784056/announcing-doi-
supp...](http://blog.publons.com/post/61380784056/announcing-doi-support-for-
reviews)

~~~
chubot
For Computer Science Ph.D.'s, I think you should get credit for writing code.

Either production-quality open source code, or pedagogical code. I'm looking
at the Stanford Pintos kernel and MIT xv6 kernel. While there were minor
papers from those projects, I think they were more like a labor of love. When
you consider the coding effort, those projects probably took 10x the effort
of a typical paper.

But yeah it would be better if a little more time was spent on code vs.
papers.

I actually attended a talk from an Adobe researcher talking about software
abstractions some years ago. He advocated that you should be able to get a
Ph.D. for finding a good abstraction, e.g. for say modeling a paint brush or
something. There are lots of bad ways to write code but only a few good ones.
Even better would be to write it in a way so that other people can actually
learn from it.

~~~
foobarbazqux
I'm not sure you can really write production quality code in a Ph.D., because
that requires a production environment, which just isn't available in
academia, and also because writing such code conflicts with research goals.

~~~
chubot
Sure you can. Plenty of 16 year olds have written say windows managers that
thousands of people use -- that's production quality code.

Your second argument is circular -- I'm saying a valid research goal should be
to write solid and useful code, and then explain it.

For example, you could write an OS kernel or kernel subsystem to fill some
particular part of the design space. A great but rare example is what the
authors of Lua have done.

~~~
foobarbazqux
All I mean is that fixing non-essential bugs, making your code pretty,
worrying about small performance regressions, writing thorough documentation,
polishing the user interface, porting your code to N platforms, dealing with
copyrights and patents and trademarks, ensuring good distribution of your code
either by selling it or making packages for major distros, and taking care of
your users generally doesn't help you to get papers published.

Academia wants new ideas. Production (industry) generally revolves around
making a new idea useful to a broad range of people, by which point it's an
old idea. You can take something written in academia and put it into
production, but if you do that in your Ph.D. it almost certainly isn't helping
you to finish.

~~~
chubot
You're still arguing circularly. I know that what you mention is exactly the
reason why academics don't write more code.

I'm saying it would be better for society if the academic culture emphasized
the craft of coding, rather than solely "new ideas". The whole point of this
article is that the emphasis on "new ideas" incentivizes fraud.

Academic culture changes faster than you think. I expect that the structural
changes caused by online courses will have a big effect in the near future.

~~~
foobarbazqux
In my experience, it's already at the point where a focus on trade and craft
actually hampers innovation in an academic context, because so much of the
funding comes from industry. Most papers today are about safe, incremental
improvements rather than bold new ideas, because there's no obvious money in
significant breaks from tradition. I think we need to let academia and
industry each do their job: academia can produce great, wild ideas and
discover fundamental truths, and industry can refine them into something
profitable.

------
Brakenshire
You can't enforce such a brutal publish or perish pressure on someone's
livelihood without selecting to some extent for people who will abuse the
system. I personally think the whole system is fundamentally flawed, given the
extent to which it relies upon trust. You're operating a very, very
competitive market, where the integrity of the 'product' is almost entirely
self-verified.

------
rurounijones
Kind of ironic that we have lawyers from elsevier talking about misconduct
when they have this [http://www.the-
scientist.com/?articles.view/articleNo/27383/...](http://www.the-
scientist.com/?articles.view/articleNo/27383/title/Elsevier-published-6-fake-
journals/) in their closet as well as sundry other things.

On the other hand, maybe they really are trying to improve themselves and I am
being too judgemental.

------
spikels
Important issue. It is good to see Elsevier, a publisher often thought to be
more concerned with profits than with science, highlighting the problem.

~~~
Kliment
I don't see why that matters. This is an issue with absolutely every journal,
and Elsevier should die regardless. Ask the editors of any open access
journal what percentage of submissions show problems like this, and you'll
likely get the same response. This in no way excuses or mitigates the evil
that Elsevier has done to the world.
same response. This in no way excuses or mitigates the evil that Elsevier has
done to the world.

------
WalterSear
And those are the ones who do a sloppy enough job to get caught.

------
denzil_correa
I can speak for Computer Science and the problem is "quid pro quo".
Irrespective of what the conferences/journals say, reviews are NEVER blind,
and people in the community know each other. No one "dares" to cross lines or
report anything, for fear of being censured.

~~~
pmiller2
Exactly. I've experienced this in mathematics. Just knowing the topic of a
paper will typically narrow down the possible list of authors to about 5
people. From there, the writing style or other clues typically give it away.

------
mistercow
I thought this was going to be a study of papers written by undergrads for
class, in which case 10% sounded extremely low.

------
gcatalfamo
Elsevier talks about ethical problems. There is a joke in here somewhere.

------
bsullivan01
What happens to them? Are they blacklisted from future publications?

