
Researcher at center of epic fraud remains an enigma to those who exposed him - maxerickson
http://www.sciencemag.org/news/2018/08/researcher-center-epic-fraud-remains-enigma-those-who-exposed-him
======
btrettel
Fraud is a problem, but I worry much more about research done sloppily but in
good faith. Spotting errors can be very difficult, so it's best to avoid them
altogether if possible. And given how difficult getting retractions was in
this case, that's another reason to stop problems early on.

I haven't written too many peer reviews, but I was somewhat harsh in one I
wrote because the paper was so bad. Multiple major problems, but I think it
ultimately could be redeemed after a lot of work. The editor seems to have
rejected the paper, and they sent me the other reviews for a reason unknown to
me. I was surprised that none of the other reviewers noticed an error I found
to be particularly obvious. Though one of the reviewers did notice a few
problems I missed, so perhaps what's obvious to me is not necessarily obvious
to others.

~~~
OisinMoran
You reminded me of a really great post from the Data Genetics blog about
estimating the amount of unseen errors in a document based on the errors
actually found by independent reviewers. [0] It's such a simple idea but seems
incredibly powerful.

[0]
[http://datagenetics.com/blog/december12015/index.html](http://datagenetics.com/blog/december12015/index.html)
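
The estimator behind that post is a capture-recapture idea (often attributed
to Lincoln and Petersen): the fraction of reviewer A's errors that reviewer B
also caught estimates the fraction of all errors B catches. A minimal sketch
in Python, with made-up review counts just for illustration:

```python
def estimate_total_errors(found_a, found_b, found_both):
    """Lincoln-Petersen estimate: two independent reviewers find
    found_a and found_b errors respectively, of which found_both
    were caught by both. Estimated total = found_a * found_b / overlap."""
    if found_both == 0:
        raise ValueError("no overlap between reviewers: estimate is unbounded")
    return found_a * found_b / found_both

# Hypothetical numbers: reviewer A finds 15 errors, B finds 12, 9 in common.
total = estimate_total_errors(15, 12, 9)   # estimated 20 errors in total
unseen = total - (15 + 12 - 9)             # ~2 errors neither reviewer caught
```

The assumption doing the work is that the reviewers are independent and every
error is equally likely to be spotted, which real documents only approximate.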

~~~
btrettel
Nice post. I've seen techniques like that used before, and I recommend their
use.

One problem I see is that what counts as an "error" is ambiguous. I might call
something one error while others classify it as two. A clearer definition
might be needed, but to a first approximation the naive approach should be
fine.

------
phkahler
The problem here is that the journals do not provide proper review. Perhaps
it's a presumption of honesty/integrity, but they need to start asking whether
the experiment was plausible. If it would have been expensive to run, who
funded it, etc...

It's sad that science has come to this.

~~~
neuromantik8086
The problem is that there are too many scientists chasing too few dollars:
something's gotta give, and on the extreme side of things you end up getting
fraud / faked data. This could be mitigated by open science practices and
reforms to the review process to some extent, but these only address a symptom
and not the root cause.

The primary solutions I can see (not necessarily mutually exclusive) are to a)
reduce the costs of research equipment and staff substantially so that
competition for funding becomes less relevant, b) provide a larger pot of
state-sponsored scientific funding (industrial science tends to favor refining
research more than performing tentative research so corporate funding would be
an ill-advised solution), or c) discourage budding grad students from going
down the grad school track in the first place.

As an individual, option c) is something that I personally feel like I can
make a difference with. Professors in undergrad can be really terrible about
conveying the exact career options you can have in science without necessarily
going for the PhD. Invisible support positions like HPC system administrators,
research software engineers, and even to some extent hospital lab workers
don't require hefty commitments to grad school but still allow one to live
within the world of scientific discourse and contribute to the advancement of
various fields as an enabler (perhaps less so for hospital workers).
Unfortunately, the invisibility of these professions is a major problem, which
is why there are organizations like UKRSE [0] that strive to generate more
recognition for at least Research Software Engineering (curiously Google in
the States seems to have been quick to recognize scientific programmers as
distinct professionals relative to PhDs and other programmers, while academia
in the US has not done so nearly as much). Also, for some support positions in
IT there are non-science options that are much more lucrative.

[0] [https://rse.ac.uk/](https://rse.ac.uk/)

~~~
s0rce
Since the whole scientific research machine needs a large number of PhD
students and postdocs to perform research cheaply I don't see fixing c) until
we solve the funding issue, since the reliance on cheap student/postdoc labor
is partly a symptom of lack of funding.

Another point is that professors are very biased in that they successfully
completed grad school, probably a postdoc if they are relatively new and then
got a tenure track job. They then suggest that you follow a similar track:
strong survivorship bias, since you see their success and not the "failure" of
the 90% who couldn't or didn't find jobs in academia.

~~~
ISL
I routinely advise students to carefully consider whether or not grad school
is for them.

The simple fact that each professor advises >10 grad students in their career
is generally sufficient to open an undergraduate's eyes to reality.

Do it for love, not for money, and do it only with the recognition of the
fiscal/life sacrifice that a Ph.D entails.

~~~
Fomite
As do I - it's helpful to point out to students that at steady state, the
replacement rate for a tenured professor is _one_. If the field is doubling in
size over their lifetime...it's two.

But I'm also in a field where "alternative" careers don't carry quite the same
stigma.

------
raincom
This is the other side of plagiarism. In third-world countries, promotions
depend on publications, so people just copy others' papers. This is easy to
catch with software, given enough access to all the journals.

In experimental fields, one just pulls data out of thin air or one just
discards observations that are contrary to the hypothesis being tested. This
attitude is so rampant in psychology, cognitive neuroscience, and disciplines
that use statistics.

I’m quite sympathetic to some guy in some random university in Pakistan, India
or China. His plagiarism involves submitting his paper to some fourth-rate
journal.

The real abusers are in the first world; their primary tools are experiments
and data.

~~~
et2o
That random guy in Pakistan, India, or China would absolutely submit his [1]
publication to NEJM or Nature if he could get it accepted there.

[1] I say "his" because it seems like men are highly overrepresented in
submitting falsified data. Obviously there are probably more men in science to
start with, but consider that 30/30 of the most retracted scientists are men.
[https://retractionwatch.com/the-retraction-watch-leaderboard/](https://retractionwatch.com/the-retraction-watch-leaderboard/)

~~~
eftychis
30 is not even statistically significant... Plus, my belief at least, this is
the tip of the iceberg for some fields. (So don't worry there will be more.)

~~~
Dylan16807
30 vs. 0 is statistically significant for something that's theoretically a
coin flip. It even beats a stringent five-sigma threshold.
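
The arithmetic checks out: under a fair "coin flip" null, 30 out of 30 being
men has probability 0.5^30. A quick standard-library sketch converting that
one-sided p-value to a sigma level:

```python
from statistics import NormalDist

def one_sided_sigma(p):
    """Convert a one-sided p-value into the equivalent z-score ("sigma")."""
    return NormalDist().inv_cdf(1 - p)

p_fair = 0.5 ** 30               # ~9.3e-10 under a 50/50 null
sigma = one_sided_sigma(p_fair)  # ~6 sigma, past the one-sided 5-sigma bar (~2.9e-7)
```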

~~~
et2o
Definitely!

If I were being pedantic [I guess I am :-) ], I'd probably say the best prior
would be p = (number of male scientists) / (number of all scientists).

Overall I bet men are somewhat more than half, but even at an 80/20 split,
0.8^30 is about 1.2e-3, which still works out to around three sigma.

------
danbmil99
The Bernie Madoff off of clinical trials?

People in my immediate circle suffer from ailments mentioned in this article.
Do these researchers have any conscience at all? Did they understand they're
playing with people's lives?

I put them in the same category as the pharmacist who watered down the chemo
drugs. There really should be criminal prosecutions and jail time for these
crimes.

~~~
rwmj
I think what was most scary about this article is that if Sato had just been
slightly more "realistic" in his fake data - such as having fewer fake
patients and slightly less dramatic results - he would have got away with it.

As such it's not really comparable to Madoff. Madoff would have been caught
eventually whatever happened because what he was doing was fundamentally
unsustainable. But this fraud was fairly close to being undiscovered. It was
only caught because of the persistence of some researchers (for whom it wasn't
even their job) and the unbelievable data.

------
ggm
We don't need publish or perish. We need replicate or perish.

~~~
Drdrdrq
This. If a study is not replicated, it might as well not exist.

------
baby
[https://en.wikipedia.org/wiki/Igor_and_Grichka_Bogdanoff](https://en.wikipedia.org/wiki/Igor_and_Grichka_Bogdanoff)

tl;dr: two brothers faked their way to PhDs and became TV celebrities and book
authors in France. A lot of people have no idea about this and still respect
them.

~~~
yodsanklai
> A lot of people have no idea about this and still respect them.

That's sad indeed. They also write popular-science books that sell well but
are badly confused. I feel bad for the people who buy their books.

------
xevb3k
I was involved in a project with a researcher in Japan; it was impossible to
replicate their work.

Numerous people (academics in the field) have suggested that the work was
falsified.

I somehow can’t quite bring myself to believe it. I can’t understand what the
motivation would be, or that any reasonable person would fake scientific data,
and continue to do so over a period of years.

The STAP cell debacle is somewhat similar. What’s the endgame? Surely it
becomes clear in the end that the work cannot be replicated?

Can anyone shed some light on the motivation for these frauds? Is there
something in particular about the Japanese ecosystem that makes them more
common?

~~~
hh3k0
> Can anyone shed some light on the motivation for these frauds?

Money and prestige while it lasts?

> Is there something in particular about the Japanese ecosystem that makes
> them more common?

Well, the article states the following:

> Michiie Sakamoto, who is leading another investigation at Keio University,
> into Iwamoto's studies in animals, says it has to do with respect. "In
> Japan, we don't usually doubt a professor," he says. "We basically believe
> people. We think we don't need strict rules to watch them carefully." As a
> result, researchers faking their results may be exposed only after they have
> racked up many publications.

~~~
xevb3k
The motivation exists elsewhere. The statement “In Japan, we don’t usually
doubt a professor” is also true elsewhere. I don’t usually read any paper with
the assumption that data was fabricated (maybe selected, possibly presented in
a way to tell the strongest story, but not out-right fake).

So, I still can’t quite put my finger on what is different about the Japanese
ecosystem. Perhaps the checks and balances within departments are not as
strong?

Outside Japan, a manager or department head seems to have a stronger
supervisory role in my limited experience.

------
appleflaxen
at what point does an author have so many retractions that their work is taken
prima facie as untrustworthy?

IMO that answer should basically be two (maybe even one). And if a journal is
not willing to issue a retraction on those grounds (which I would understand),
it should at least flag the paper with a "retracted author" notice.

truth is a pendulum. we are living in the extreme end of one swing: a gilded
age where "truth isn't truth". It's interesting to see how much
synchronization of the pendulum there is in very different aspects of society
(such as medicine and politics).

------
google_censors
The behavior of JAMA Editor-in-Chief Howard Bauchner is appalling and
frightening. If he's so unwilling to deal with fraud in his journal, what is
he letting people get away with in his position as Vice Chairman of
Pediatrics at Boston University? He seems far too keen to sweep things under
the rug.

~~~
amckinlay
How is it responsible, medically, to sit on a report of scientific fraud
without action for two years, selling and profiting off bad data?

~~~
brennebeck
Are you being sardonic or is that a genuine question? It’s difficult to tell
here since most comments are generally serious and not flippant.

~~~
detuur
Reads to me like an expression of disbelief.

