
Fraud, not error, accounts for most scientific retractions - davecap1
http://wavefunction.fieldofscience.com/2012/10/fraud-not-error-accounts-for-most.html
======
tisme
I dated someone studying Biology/Biochemistry for a while. This was
interesting because the field itself is interesting, so we had a lot to talk
about. She was doing a PhD on a subject that was important to me (diabetes)
and that I got to know quite a bit about.

Being a curious person I then started to read more about it, and found a
published study done at a different university on the _exact same subject_.
There was no difference in the experimental set-up or the parameters, and the
results achieved to that date matched.

When confronted with this she responded that she already knew: her PhD
advisor had selected the study and had presented it to the grant givers as a
novel approach. He clearly already knew what the outcome would be, that it
would be positive, and that the grant gatekeepers would find out about the
other study either too late (which would mean they'd keep quiet because they
had not done their jobs properly) or not at all (because nobody really
cared). She didn't want to rock the boat; she just wanted her PhD over and
done with. Confronting the advisor would have been the end of her career for
sure, whereas duplicating a bunch of work whose outcome you already know is
ok.

This study was quite extensive and burned through a whole pile of resources.

At the time I was quite shocked by all this, that in academia there would be
such cynical misconduct. According to the lady this was perfectly normal and
par for the course in her field.

It's one thing for a group to openly try to duplicate the results of another,
it is a different thing to use the vastness of the scientific body as a means
to re-do success stories found elsewhere to increase your local standing.

There is a Samwer Brothers parallel in there somewhere.

~~~
digeridoo
In principle, duplicating results is a good thing, it's what prevents fraud.
The incentives are wrong here, and it isn't proper academic conduct, but it's
still valuable science.

------
kghose
Abstract of the article:

A detailed review of all 2,047 biomedical and life-science research articles
indexed by PubMed as retracted on May 3, 2012 revealed that only 21.3% of
retractions were attributable to error. In contrast, 67.4% of retractions were
attributable to misconduct, including fraud or suspected fraud (43.4%),
duplicate publication (14.2%), and plagiarism (9.8%). Incomplete,
uninformative or misleading retraction announcements have led to a previous
underestimation of the role of fraud in the ongoing retraction epidemic. The
percentage of scientific articles retracted because of fraud has increased
∼10-fold since 1975. Retractions exhibit distinctive temporal and geographic
patterns that may reveal underlying causes.

In general (Table 1), retractions are not strongly related to journal
prestige, which was interesting to me. Nature Neuroscience, Neuron and the
Journal of Neuroscience do not feature in the list, which, in my eyes, either
raises their prestige or points to fraud in neuroscience being harder to
detect.

A side note: in the paper the authors use 3D pie charts with a bad color
scheme (Fig. 2, breakdown by country). No percentages are given. Please don't
do this, folks; we're professionals here, not the sales dept.

One clarification to the blog post: it mentions duplication, which in science
I consider a very good thing. I wish there were more papers that said 'we
replicated your finding, thereby increasing its credibility'. The paper's
authors refer to duplicate publication, which means taking the same paper and
submitting it repeatedly to several journals to inflate one's publication
count. That is dishonest not from a scientific-data point of view, but from a
scientific-credit point of view.

~~~
JoeAltmaier
I'm concerned with the poor statistical analysis in the article. I would hope
that folks citing bad papers would publish a good one on the subject.

In particular, as you mention, we have no idea what the per-capita rates
are - just that the US and Germany are the biggest offenders.

Then they claim it's harder to detect than they thought - right alongside a
time-indexed bar graph showing a huge bump in detected fraud in the last few
years. Apparently it's not easier to detect than ever before? Or is fraud
more frequent? Or is there a sampling problem (incomplete data for earlier
years, less screening, different methodologies)?

I suspect it's because publishing is easier than ever before. We see a lot
more science-theatre and less rigorous funded research.

------
codex
Say 1% of papers are fraudulent and 10% have serious errors. Now assume it is
10x easier to prove fraud than to prove, beyond reasonable doubt, that an
error was made. The result is that 50% of retractions are due to fraud, but
still only 1% of all papers are fraudulent.
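This base-rate effect is easy to check with a quick back-of-the-envelope
calculation. The rates below are the comment's hypothetical numbers, and the
5% chance of proving an error is an arbitrary illustrative assumption (only
the 10:1 ratio matters for the result):

```python
# Hypothetical numbers from the comment above, not measured values.
fraud_rate = 0.01            # 1% of papers are fraudulent
error_rate = 0.10            # 10% of papers have serious errors

p_prove_error = 0.05         # assumed chance a serious error is proven
p_prove_fraud = 10 * p_prove_error  # fraud is 10x easier to prove

# Fraction of all papers retracted for each reason.
retracted_for_fraud = fraud_rate * p_prove_fraud   # 0.005
retracted_for_error = error_rate * p_prove_error   # 0.005

fraud_share = retracted_for_fraud / (retracted_for_fraud + retracted_for_error)
print(f"{fraud_share:.0%} of retractions are due to fraud")  # 50%
print(f"{fraud_rate:.0%} of all papers are fraudulent")      # still 1%
```

So even if fraud is ten times rarer than serious error, it can still account
for half of all retractions purely because it is easier to prove.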

~~~
Alex3917
The percent of papers that are fraudulent is probably more like 95+% if you
count:

\- Papers where the authors fabricate data

\- Papers where bad data is hidden

\- Papers with named authors who didn't contribute to the research. (This
alone is ~30% of all papers in medical journals.)

\- Papers with conflicts of interest that weren't reported.

\- Papers with improper usage of statistics.

\- Papers where the results and conclusions don't match the data.

\- Papers where the results / conclusion aren't epistemologically justified
for other reasons.

\- Papers that weren't published/written because they showed negative results.

\- Papers that don't follow the methodological best practices for their given
area of research.

\- Papers that incorrectly summarize previous research.

\- Papers that rely on flawed assumptions.

\- Papers that purposely ask the wrong question(s) in order to mislead the
reader.

\- Papers that are intellectually dishonest in some other way.

Basically if you want to know the percentage of research that is truly
fraudulent when you use a more expansive definition, 95% is probably the lower
bound for many if not most academic fields.

~~~
JoeAltmaier
That's a persuasive argument. But as a research paper, you would probably rate
it as fraudulent by your own criterion. Where did that number '95%' come from?
I suspect you made it up.

Still, disregarding the irony, a good point.

~~~
Alex3917
"Where did that number '95%' come from? I suspect you made it up."

I don't think there is any research that has a single group of raters look at
all of these categories at once, but there is a lot of research that looks at
them separately. E.g. as I mentioned, "According to a study published in JAMA,
11 percent of the articles published in peer-reviewed medical journals are
written by 'ghost authors.' (And 19 percent of the articles named 'honorary
authors' who had not contributed enough to the research and writing to justify
being listed as authors.)" That's up to 30% of all papers that are fraudulent
right off the bat.

Because many of these categories have been looked at separately it's not hard
to figure out that the total for all research has to be at least 95%. Also, if
you look at what percentage of papers can't be replicated:

[http://www.newscientist.com/article/mg21528826.000-is-
medica...](http://www.newscientist.com/article/mg21528826.000-is-medical-
science-built-on-shaky-foundations.html)

It's clear that the vast majority of research seems to be just made up out of
thin air to begin with. And the percentage of research that can be replicated
but that is fraudulent in some other way is similarly high. What it comes down
to is that much if not most of modern science is basically just a giant make-
work program for pseudo-intellectuals that's designed to produce marketing
propaganda for corporations and prevent political unrest.

~~~
JoeAltmaier
...and a lot more anecdotal 'evidence'. That article, for instance, seems to
be by a guy with an axe to grind - he's the founder of Science Exchange, a
web company with services to sell. By writing a scare story on irreproducible
results he's drumming up business. How does that distinguish him from Amgen
etc.?

I'm not trying to be mean, just noting that it's very easy to write opinion
that appears as science and gets misinterpreted or given too much weight. The
posting about the Decline Effect says it better.

~~~
Alex3917
"That article for instance - seems to be a guy with an axe to grind"

I don't see anything in the article to indicate this, and also the studies
he's citing weren't even written by him. And plus it's not like he's the only
one to say this, e.g.:

[http://www.theatlantic.com/magazine/print/2010/11/lies-
damne...](http://www.theatlantic.com/magazine/print/2010/11/lies-damned-lies-
and-medical-science/308269/)

I realize that he also has a financial conflict of interest. The difference is
that A) he's properly disclosed his conflict of interest and B) he isn't
wrong.

~~~
JoeAltmaier
The studies he's citing have been cherry-picked by him to support his opinion
(that article was published under Opinion).

Not seeing the conflict, the bias, is really the problem with current
publishing. Most anything that can be published, is. And none of it comes
well-labelled. Journalism substitutes for analysis; dry research is summarized
carelessly or with stress on the sensational.

~~~
Alex3917
"The studies he's citing have been cherry-picked by him to support his
opinion."

What makes you think they have been cherry-picked? The studies he cites (and
the one by Ioannidis I mentioned) are the only studies of reproducibility
that I've seen, and they're the only ones the Wikipedia article on
reproducibility cites.

------
beloch
I'd like to see these numbers expressed as a percentage of total publications.
While fraud, etc. may be on the rise in terms of absolute numbers, perhaps a
better question is whether the _rate_ is also rising or if this is just the
effect of an ever-increasing number of publications per year.

------
davecap1
Original PNAS paper:
[http://www.pnas.org/content/early/2012/09/27/1212247109.abst...](http://www.pnas.org/content/early/2012/09/27/1212247109.abstract)

------
Detrus
I wonder how this relates to the Decline Effect
<http://en.wikipedia.org/wiki/Decline_effect>

~~~
thecabinet
Bingo! An absolutely wonderful blog entry on the decline effect is
[http://thelastpsychiatrist.com/2011/02/the_decline_effect_is...](http://thelastpsychiatrist.com/2011/02/the_decline_effect_is_stupid.html)

------
arupchak
Another way to combat this is to make it easier for people to publish data
that goes against their hypothesis, and still give them 'points' for it. Part
of the problem in academic labs is that there is a race to publish papers to
get more funding to publish more papers to get more funding, and so on. If
journals stopped treating data that goes against an initial hypothesis as
'bad data', there would be an incentive to publish it anyway and, ideally,
prevent someone else from repeating the same experiment/hypothesis.

~~~
thronemonkey
The problem isn't just what journals will publish, it's how funding is
distributed: the modern NIH/NSF grant process awards money based on the
likelihood of producing positive results, which might sound reasonable on the
surface but leads to perverse incentives like these.

------
taejo
If fraud is discovered, it should always lead to a retraction. Errors, on the
other hand, are part of the scientific process. Often, the correct response is
just to publish another paper.

------
gruseom
_In a field as messy as biology, it's easy to make stuff up_

If 2/3 of the retracted papers are fraudulent, how many of the unretracted
papers are?

~~~
HarryHirsch
That's beside the point. The people in any field know their colleagues, and
know whose results are suspect. One should rather ask how much taxpayers'
money was spent on science fiction, and why it tends to take several years
until the professor in charge is busted. After all, there are usually several
grad students in on the joke; it's not as if that sort of fraud was
perpetrated in secret.

~~~
Detrus
If we assume the grad students are in on it, it's because they want to move
their careers forward. They're probably working in well-established fields
where returns on effort are diminishing. See the chart at
[http://cognitivesocialweb.com/home/2012/8/10/the-
singularity...](http://cognitivesocialweb.com/home/2012/8/10/the-singularity-
is-not-coming.html) So a combination of making no progress after years of
effort, careerism, or trying to stay in their field may motivate fraud.

But I wouldn't be so sure that grad students, who may handle only smaller
pieces of the whole project, are always in on it.

And either way, without fraud in these stupid difficult fields a lot more
smartypants would be on Wall St., also committing fraud.

------
brador
How many are not caught? Particularly in non-scientific-method subjects like
psychology? Or data-heavy, hard-to-replicate fields like sociology?

~~~
hga
The general theory on this is that if the results are important _and_
something people can build on, when other labs try the latter they discover
something's wrong.

Things get messy where the results are viewed as important but they don't
naturally lead to more dependent experiments, at least not any time soon. Look
at the epidemiological research on dietary fat over the last 40+ years for a
sobering lesson in how badly things can go wrong.

Of course another factor is politics, internal and external to science. If
your results show a study or, worse, a sub-field is _wrong_, well, I don't
have to tell you what happens. Except that your results _might_ get accepted
when the existing gatekeepers die off.

ADDED: in the relatively hard sciences, there are few fields more
"politicized" than the "biomedical and life sciences". The work is
_difficult_, math is generally not a forte of the researchers, the broader
stakes are high (e.g. "a cure for cancer"), etc.

Still, science seems to do this better than most other fields.

