Hacker News
Researcher at center of epic fraud remains an enigma to those who exposed him (sciencemag.org)
240 points by maxerickson 6 months ago | 83 comments



Fraud is a problem, but I worry much more about research done sloppily but in good faith. Spotting errors can be very difficult, so it's best to avoid them altogether if possible. And given how difficult getting retractions was in this case, that's another reason to stop problems early on.

I haven't written too many peer reviews, but I was somewhat harsh in one I wrote because the paper was so bad. Multiple major problems, but I think it ultimately could be redeemed after a lot of work. The editor seems to have rejected the paper, and they sent me the other reviews for a reason unknown to me. I was surprised that none of the other reviewers noticed an error I found to be particularly obvious. Though one of the reviewers did notice a few problems I missed, so perhaps what's obvious to me is not necessarily obvious to others.


You reminded me of a really great post from the Data Genetics blog about estimating the number of unseen errors in a document based on the errors actually found by independent reviewers. [0] It's such a simple idea but seems incredibly powerful.

[0] http://datagenetics.com/blog/december12015/index.html
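The technique in that post is essentially the Lincoln-Petersen capture-recapture estimator: if one reviewer finds A errors, a second independently finds B, and C of those overlap, the total error count is roughly A*B/C. A minimal sketch in Python (the function and variable names are mine, not from the post):

```python
def estimate_total_errors(found_a, found_b, found_both):
    """Lincoln-Petersen estimate: total ~= (A * B) / C.

    Intuition: reviewer B's catch rate on the errors reviewer A found
    (C / A) should match B's catch rate on all errors (B / total),
    assuming the two reviewers work independently.
    """
    if found_both == 0:
        raise ValueError("no overlap: the estimate is unbounded")
    return (found_a * found_b) / found_both

# Example: reviewer A finds 20 errors, B finds 15, and 10 are common.
# Estimated total = 20 * 15 / 10 = 30, so ~5 errors remain unseen.
print(estimate_total_errors(20, 15, 10))  # → 30.0
```

The independence assumption is the weak point: if both reviewers tend to catch the same "easy" errors, the overlap is inflated and the total is underestimated.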


Nice post. I've seen techniques like that used before, and I recommend their use.

One problem I see is that what counts as an "error" is ambiguous. I might call something one error, but others might classify it as two. A clearer definition might be needed, but to a first approximation the naive approach should be fine.


> they sent me the other reviews for a reason unknown to me

This is common in my field, though not every journal does it. For the ones that do, after a decision is made, all the reviewers will get a letter from the editor thanking them for reviewing the paper, notifying them what decision was made, and enclosing a copy of all the reviews for reference (often with a meta-review or review summary written by the editor). I find it helpful to see the other reviews, since I can improve my reviewing by paying more attention next time to things the other reviewers had caught that I'd missed, and just get a better idea of how calibrated my reviewing is with that of others.


Since you seem to be knowledgeable about this subject, I'm curious about one thing. The sublinked article mentions this:

> Sasai, 52, was a corresponding author on one of the papers and a co-author of the other.

What is a 'corresponding author' and how much of a credibility contribution is this vs being a 'co-author'? I'm assuming studies include letters written by other scientists who remotely reviewed data but were not fully involved in the study? Which is a lesser tier contribution than being a full co-author but still essentially 'signing-off' on the study?


There is only one paper (no separate letters), which can have multiple authors if the finding was the result of collaborative work. How the collaboration is done and what authorship implies varies a lot by field, though.

In mathematics, the expectation is that all people listed as authors of a paper contributed something significant to it and have co-written (or at least read and endorsed) the overall paper, so it really is a jointly authored work (if that's not true in a given case, there will usually be a footnote explaining). In some areas of the natural sciences, though, there can be large numbers of authors on a paper, because a study is the result of a lot of people's work. But in that case there isn't necessarily an expectation that every single author has verified and endorsed the entire study. For example some authors might be students who contributed a specific experiment, so got their name added as an author on the paper for their contribution to the study. But they may not be knowledgeable enough to have evaluated the rest of the paper, and probably also shouldn't be blamed if someone further down the analysis chain manipulated the data.

A convention in parts of the natural sciences with large author lists is that the junior scientist (postdoc, PhD student, etc.) who did the most practical work is listed as the first author (and called 'first author' or 'lead author'), while the most senior scientist, responsible for overseeing the entire project, is listed last and is the 'corresponding author'. Those two then get the most credit and responsibility, while 'everyone else' is sandwiched in the middle between them.

A 'corresponding author' is, literally speaking, just the author designated as being in charge of correspondence about the paper, with the journal editor and/or with readers who want to follow up (in some journals they're the one whose contact info is printed, although in other journals, all authors' contact info is included). In my field, computer science, 'corresponding author' tends to be mostly a bureaucratic role, the person responsible for handling emails from the journal, and is usually the same person as the first author. In other fields, it can carry more of an implication of the person in charge of the study. In some, the senior scientist's name is first, so they'll end up being both 'first' and 'corresponding' author. Really no consistency across areas, so the terms only mean much when talking to someone from the same field.

For example, this paper has 15 'authors' (same as 'co-authors'), of which the last one listed is the 'corresponding author' (indicated by the little mail icon): https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1574-6968....


Thank you, that makes sense. I was curious why the 'corresponding author' would feel such (Japanese-style) dishonour if it was largely the work of the female scientist who led the study. But I see that his role was a bit more involved as the most senior scientist (and also the person who hired/brought in the young scientist who got discredited).

I've always meant to dig more into the whole academic journal/paper authorship thing. This was quite helpful.


Ideally the reviewers will have different areas of expertise. A statistician, for example, won't be an expert in the underlying science, but can evaluate the study design and analysis.


> I was surprised that none of the other reviewers noticed an error I found to be particularly obvious

Some reviewers spend very little time reviewing papers, and they may not even be experts on the topic.


The problem here is that the journals do not provide proper review. Perhaps it's a presumption of honesty/integrity, but they need to start asking whether the experiment was plausible: if it would have been expensive, who funded it, etc.

It's sad that science has come to this.


The problem is that there are too many scientists chasing too few dollars; something's gotta give, and on the extreme side of things you end up with fraud / faked data. This could be mitigated to some extent by open science practices and reforms to the review process, but those only address a symptom and not the root cause.

The primary solutions I can see (not necessarily mutually exclusive) are to a) substantially reduce the costs of research equipment and staff so that competition for funding becomes less relevant, b) provide a larger pot of state-sponsored scientific funding (industrial science tends to favor refining existing work over tentative exploratory research, so corporate funding would be an ill-advised solution), or c) discourage budding potential grad students from going down the grad school track in the first place.

As an individual, option c) is something that I personally feel like I can make a difference with. Professors in undergrad can be really terrible about conveying the exact career options you can have in science without necessarily going for the PhD. Invisible support positions like HPC system administrators, research software engineers, and even to some extent hospital lab workers don't require hefty commitments to grad school but still allow one to live within the world of scientific discourse and contribute to the advancement of various fields as an enabler (perhaps less so for hospital workers). Unfortunately, the invisibility of these professions is a major problem, which is why there are organizations like UKRSE [0] that strive to generate more recognition for at least Research Software Engineering (curiously Google in the States seems to have been quick to recognize scientific programmers as distinct professionals relative to PhDs and other programmers, while academia in the US has not done so nearly as much). Also, for some support positions in IT there are non-science options that are much more lucrative.

[0] https://rse.ac.uk/


Since the whole scientific research machine needs a large number of PhD students and postdocs to perform research cheaply, I don't see c) being fixed until we solve the funding issue, since the reliance on cheap student/postdoc labor is partly a symptom of the lack of funding.

Another point is that professors are heavily biased: they successfully completed grad school, probably a postdoc if they are relatively new, and then got a tenure-track job. When they suggest that you follow a similar track, that's strong survivorship bias: you see their success, not the "failure" of the 90% who couldn't or didn't find jobs in academia.


Even having won the Tenure-Track Lotto, I don't do this.

Generally speaking, a student telling me they want a Ph.D in my field gets a strong warning and a frank discussion about the nature of the job market. I'm willing to support them if they simply cannot imagine doing anything else with their lives, but the pool of students for whom getting a Ph.D is actually a good idea is a very small pool indeed.


I routinely advise students to carefully consider whether or not grad school is for them.

The simple fact that each professor advises >10 grad students in their career is generally sufficient to open an undergraduate's eyes to reality.

Do it for love, not for money, and do it only with the recognition of the fiscal/life sacrifice that a Ph.D entails.


As do I - it's helpful to point out to students that at steady state, the replacement rate for a tenured professor is one. If the field is doubling in size over their lifetime...it's two.

But I'm also in a field where "alternative" careers don't carry quite the same stigma.


> The problem is that there are too many scientists chasing too few dollars- something's gotta give

In anything close to an efficient market, what gives is that you end up with fewer scientists.


Also, one thing that surprised me is that submissions are not blinded to the reviewer: you can see who wrote the paper you're reviewing.

Some identities will always be obvious, especially in a small field where you know who is doing what.

But if you are a reviewer and a 'very famous and respectable' colleague is co-authoring the paper you are reviewing, you are much more likely to let unclear or incorrect aspects of the paper slide.

It would seem like a really simple step to just redact any aspects of the paper that would give away names, locations, etc.


This is very field-dependent. In mine (political science) it's routine to blind the reviews in both directions. In others, papers are reviewed with the author's name and affiliation on the top.

There is a further issue that if you're in a small-enough (sub-)field you probably know what other people are working on and can effectively unblind the authors anyway.


On a large scale, assuming people are being honest is probably more functional and beneficial. The flow of research, particularly medical research, seems covered in enough speed bumps as it is.

And the fact that the fraud was found out, even with a delay, speaks in favour of the system.


The fact that fraud has been found out can mean two things:

1) That the system works so well that the only time fraud has occurred it has been found out

2) That the system works very poorly and this is the only time fraud has been found out


This is the other side of plagiarism. In third-world countries, promotions depend on publications, so people just copy others' papers. This is easy to catch with software, given enough access to all the journals.

In experimental fields, one just pulls data out of thin air or one just discards observations that are contrary to the hypothesis being tested. This attitude is so rampant in psychology, cognitive neuroscience, and disciplines that use statistics.

I'm quite sympathetic to some guy at some random university in Pakistan, India, or China. His plagiarism involves submitting his paper to some fourth-rate publication.

The real abusers are in the first world, their primary tool is experiments and data.


> In experimental fields, one just pulls data out of thin air or one just discards observations that are contrary to the hypothesis being tested. This attitude is so rampant in psychology, cognitive neuroscience, and disciplines that use statistics.

Please cite direct evidence that this is “rampant”.

Articles making claims of “replication crisis” without direct evidence of fraud do not count. There’s a huge gap between failure to replicate and fraud, and quite frankly, most of the replication crisis media tends to repeat the same stories, providing little additional proof to back up the claims of “crisis”.


Let's start with our beloved Brian Wansink:

https://www.buzzfeednews.com/article/stephaniemlee/brian-wan...

The money quote is:

> "I don’t think I’ve ever done an interesting study where the data ‘came out’ the first time I looked at it,” he told her over email.


Wansink is a fraud. One case does not make for rampancy.


It wasn't just Wansink.

Everyone around him knew and participated to some extent.

His postdocs (except the one who said no), his PhD students, and all his other co-authors.

They all knew and never said a word.

When Cornell was told, they made some pious noises and ignored it until it became too public.

Journals ignored all criticism until it became too public.

This is a system that does not police fraud.


When I look at the grad school system, I am really confused. I really, really wanted to go to grad school at a top 10 history program, where I also did an undergrad honors thesis. I was prepared to accept $18,000 a year as a living wage, and I was excited to be an adjunct for $35,000 a year with crazy hours and no health benefits in a small college town.

Now I am paid usually $45,000 - $60,000 per year to talk on a phone to people who need help filling out a web form in order to buy something. It's a bit absurd. I don't mind this work and it is good money. It keeps me and my girlfriend fed and sheltered.

Why is the university system so negligent and abusive towards those who invent new ideas and teach new inventors?


>"Either millions of students are overpaying every year, or the system provides some value."

It's the former, except the government is paying for it...

EDIT:

I say this as someone who got paid by the government via academia to torture rodents for no reason for years.


I can guarantee you that the $30,000 I wired from my bank account to my student loan company was not given to me by the government.

The government of course gave me the loan for the inflated price of attending uni, but goddamnit I paid it back myself with my own blood and tears.


Grad school in academic research (as opposed to engineering and MBA programs) is mostly funded by scholarships/grants. Your tuition is paid by your PI using a grant from the government (or sometimes a private foundation or company).


No worries, we are talking about two different things.

I'm saying that when a grad student or adjunct teaches a class, they get paid a few thousand dollars. The students in that class are paying 100x that amount in tuition.

I'm implying that grad students should be paid more for this service, instead of the money going to administrator salaries and football stadiums.


Oh I agree. It's just if you're doing grad school and you're paying with student loans, not the government, you're doing it wrong.


Yes 100%. Sometimes it works out profitable for "professional schools", but definitely not for history majors or even sometimes biology majors.


I had a scholarship.


>"Why is the university system so negligent and abusive towards those who invent new ideas and teach new inventors?"

Have you considered the possibility that academia is paying researchers etc. what they are worth on average? It's just that the majority of the people being paid are generating near-zero, or negative, value to society.


I've definitely considered it. However, interestingly, attending university in the USA usually carries a price tag of $20,000 to $30,000 per year minimum, after all the smoke and mirrors of "grants and financial aid".

So, you have consumers (students and their parents) paying real money for a real education that they are hoping will result in gainful employment. Either millions of students are overpaying every year, or the system provides some value.

Now, the ones delivering that value (grad students) are receiving maybe $8,000 per semester while teaching, let's say, 100 students, who are collectively paying nearly $1,000,000 for 4 months of tuition.

Where does the other $992,000 go, besides administrator salaries and football stadiums?


Besides administrators and sports programs, glamorous campus amenities and affirmative action admittees.


“Besides administrators and sports programs, glamorous campus amenities and affirmative action admittees.” Is affirmative action really that expensive?


I believe in funding affirmative action but yes it is expensive, the university accepts the students into the school and covers much of the cost of their housing and tuition. The actual cost isn't $60,000 per year like the sticker price says, but it is certainly around $8,000-12,000 in operational costs.


When you talk to your boss about salaries, you'll quickly find out that your salary isn't determined by what value you bring to the firm, it's by whatever the market will bear.


The value you bring to the firm is a factor in that it is a cap on what you'll get paid. Value is also relative, if a monkey can provide 90% of your value then you'll get paid a little more than peanuts.


If a monkey can provide 90% of your value, but there's a shortage of people who can provide the other 10%, you might still get paid well.

I mean, if a monkey doing the job can generate 900k of value per year, but a human doing it can generate 1MM per year, then it would be better to hire the human for anything up to 96k/year (assuming peanuts cost less than 1k/quarter).
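Spelling out that arithmetic (numbers from the comment above; "peanuts" taken at the stated upper bound of 1k/quarter, i.e. 4k/year):

```python
# Numbers from the comment above; peanut_cost is an assumed upper bound.
monkey_value = 900_000   # value/year the monkey can generate
human_value = 1_000_000  # value/year the human can generate
peanut_cost = 4 * 1_000  # monkey's annual wage in peanuts (1k/quarter)

# The employer is indifferent when the two surpluses match:
#   human_value - wage == monkey_value - peanut_cost
break_even_wage = human_value - (monkey_value - peanut_cost)
print(break_even_wage)  # → 104000
```

So the break-even wage is actually 104k/year; any wage below that beats keeping the monkey, which puts the 96k figure comfortably inside the profitable range.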


If you've got balls, you can try to extort your replacement value. If you come with unique skills, it may be much higher than what your résumé suggests.


That requires rational thinking and accounting skills on the part of management, both of which are often in short supply; just look at the current Elon Musk meltdown... mission-hostile management is a thing.


Of course your value is relevant to your salary. It sets the upper limit.


In capitalism it's better to create a small amount of value that you can capture than a large amount of value that you can't. You will be more rewarded if you can say "I made those people buy $500k more in widgets than they would otherwise have bought last year" and back that up with stats, than if you write a paper that leads to better widgets years down the line, or even worse, write a paper on the sociocultural effects of widgets.


People rob gas stations, that doesn't mean that there's a system that doesn't police robbery. And that's still one case.


Yeah this is a case of someone on daytime television shows talking about dieting. There are plenty of fake doctors making claims in this realm. The scary part is the acceptance of fake science into the general public and especially it being perpetuated heavily by a large part of our political system.


That random guy in Pakistan, India, or China would absolutely submit his [1] publication to NEJM or Nature if he could get it accepted there.

[1] I say "his" because men seem to be highly overrepresented in submitting falsified data. There are of course more men in science to start with, but consider that 30/30 of the most-retracted scientists are men. https://retractionwatch.com/the-retraction-watch-leaderboard...


Catching a fraudster takes a delicate combination of factors: the fraudster not working hard enough at faking the data, unusually close scrutiny (and why would a given study be subjected to abnormally close scrutiny?), and many others. It is also worth mentioning that failed replications do not necessarily lead to retractions; in fact, in most cases (an honest mistake or unlucky sampling) retracting the original paper wouldn't make any sense. Finally, not all retractions involve fraud; for example, mathematicians will issue retractions when mistakes are pointed out.

For these reasons, retractions are not a good measure of behavior.


True. However, if your comment is about gender, that would only be true if women were less likely than men to get caught falsifying data (i.e., if detection differed by gender). I can't imagine why that would be the case.

Edit: this was in response to your comment before your edit

I agree retractions are an imperfect measure, but it's not like there is much more to go on.

Most importantly, the 30 people on this leaderboard who have dozens of retracted papers are not just people who got unlucky with sampling or made one mistake and had to retract a paper–they are those who repeatedly and knowingly faked data and then published it.


Retractions aren't issued just for falsifying data. Each field has its own culture of retracting papers, some only retracting deliberate fraud (each one with a different standard of evidence), and others retracting mistakes as well. It is also well-known that certain countries contribute more than others to scientific fraud - and it would be unlikely for them to all have exactly the same gender ratio. By the time all of these factors (along with whatever else can be thought of) are included there might be a lot less statistical significance left than we started with.


The retractions of people on this top 30 list are generally for falsifying data, severe plagiarism, or other misbehavior which would fall into the category of deliberate fraud.

I don't disagree that the rate of misbehavior and gender balance of scientists differ internationally and would ideally be factored into an analysis.


Do we always have to do the gender discussion? Don't get me wrong, I'd love a real scientific analysis. But social media speculations with people trying to push their favorite ideology with at best some anecdotes seems like a huge waste of time.

But to play the game... maybe women are more likely to get caught falsifying data before publication, because the evil patriarchy thinks women cannot do science and thus verifies their results more rigorously? We could surely come up with lots of other crappy explanations and theories.


I am a man with no gender agenda whatsoever. I was just trying to justify my pronoun :). Interesting to observe the level of vitriol pointing this out led to.


Then surely a generic CEO can be referred to as “him” since 29/30 of Fortune 500 CEOs are men.


Yes, I agree men are statistically overrepresented as CEOs of Fortune 500 companies. I don't think that's debatable.


30 is not even statistically significant... Plus, it's my belief at least that this is the tip of the iceberg for some fields. (So don't worry, there will be more.)


It's extremely statistically significant, given any reasonable prior.

Don't take my word for it, here's a couple lines of R code.

  # grid of candidate baseline proportions of men in science
  prop.men <- seq(0, 1, length.out=100)

  # P(all 30 of 30 are men) at each baseline proportion; since the
  # observation is k == n, this equals the one-sided p-value
  pvals <- dbinom(x=30, size=30, prob=prop.men)

  plot(prop.men,
       log10(pvals),
       type="l", col="blue", lwd=2,
       ylab="log10 pval for 30/30 offenders being men",
       xlab="Baseline male percentage in science")
  abline(h=log10(0.05), col="red", lwd=2, lty=2)  # "significance" line
  text(x=0.6, y=-30, lab="Below red line\nsignificant at α=0.05")

  # the grid has 100 entries, so the indices returned here map onto
  # the baseline proportions at which 30/30 would NOT be significant
  which(pvals > 0.05)

The actual figure, for completeness: https://imgur.com/a/E47AfSW

Of course this is a bit back-of-the-envelope done in 5 minutes, but you get the point.


30 vs. 0 is statistically significant for something that's theoretically a coin flip. It even beats a stringent five-sigma threshold.


Definitely!

If I were being pedantic [I guess I am :-)], I'd probably say the best prior would be p = (number of male scientists / number of all scientists).

Overall I bet men are somewhat more than half, but even at an 80/20 split the probability of 30/30 (0.8^30 ≈ 0.0012) is still well past the significance threshold.
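As a quick sanity check of that figure, the all-male observation is the k == n corner of the binomial, so the p-value collapses to p**n; a few lines of Python (plain stdlib, mirroring the dbinom(x=30, size=30, ...) call in the R snippet upthread):

```python
# Probability that all n of the top-n most-retracted scientists are
# men, given a hypothesized baseline male proportion p. With k == n
# the binomial point mass and the one-sided tail coincide: just p**n.
def p_all_men(n, p):
    return p ** n

print(p_all_men(30, 0.5))  # fair split: ~9.3e-10
print(p_all_men(30, 0.8))  # 80/20 split: ~1.2e-3, still below 0.05
```

At an 80/20 baseline the result is roughly three sigma rather than five, but it comfortably clears any conventional α.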


Additionally, in older English grammar books, "his" is the default gender-neutral pronoun. "They/their" is a modern invention with disputed acceptance. I personally prefer "they/their" even over "his or her".

EDIT: Please don't downvote statements of fact. My karma score pays the price for your momentary burst of dopamine.


Shakespeare used singular "they"; those grammar books are stupid.

http://itre.cis.upenn.edu/~myl/languagelog/archives/002748.h...


The Bernie Madoff of clinical trials?

People in my immediate circle suffer from ailments mentioned in this article. Do these researchers have any conscience at all? Did they understand they're playing with people's lives?

I put them in the same category as the pharmacist who watered down the chemo drugs. There really should be criminal prosecutions and jail time for these crimes.


I think what was most scary about this article is that if Sato had just been slightly more "realistic" in his fake data - such as having fewer fake patients and slightly less dramatic results - he would have got away with it.

As such it's not really comparable to Madoff. Madoff would have been caught eventually whatever happened because what he was doing was fundamentally unsustainable. But this fraud was fairly close to being undiscovered. It was only caught because of the persistence of some researchers (for whom it wasn't even their job) and the unbelievable data.


He presumably killed himself after being exposed as a fraud. It's weird to me because malpractice is such a heavy thing in the US, but I don't think it extends into falsified research. Can a nation sue a researcher in another country because they funded studies based on their falsified data?


In the United States this can actually happen if federal funding is involved, although to my knowledge it's pretty rare. It's handled by the Office of Research Integrity (ORI).


We don't need publish or perish. We need replicate or perish.


This. If a study is not replicated, it might as well not exist.


https://en.wikipedia.org/wiki/Igor_and_Grichka_Bogdanoff

tl;dr: two brothers faked their way to PhDs and became TV celebrities and book authors in France. A lot of people have no idea about this and still respect them.


> A lot of people have no idea about this and still respect them.

That's sad indeed. They also write popular-science books that sell well but are very bad and confused. I feel bad for the people who buy their books.


I think https://en.wikipedia.org/wiki/Bogdanov_affair is a better reference.


Can you give me a quick rundown on them?


They were celebrities and authors before the PhDs...


I was involved in a project with a researcher in Japan; it was impossible to replicate their work.

Numerous people (academics in the field) have suggested that the work was falsified.

I somehow can’t quite bring myself to believe it. I can’t understand what the motivation would be, or why any reasonable person would fake scientific data, and continue to do so over a period of years.

The STAP cell debacle is somewhat similar. What’s the endgame? Surely it becomes clear in the end that the work cannot be replicated?

Can anyone shed some light on the motivation for these frauds? Is there something in particular about the Japanese ecosystem that makes them more common?


> Can anyone shed some light on the motivation for these frauds?

Money and prestige while it lasts?

> Is there something in particular about the Japanese ecosystem that makes them more common?

Well, the article states the following:

> Michiie Sakamoto, who is leading another investigation at Keio University, into Iwamoto's studies in animals, says it has to do with respect. "In Japan, we don't usually doubt a professor," he says. "We basically believe people. We think we don't need strict rules to watch them carefully." As a result, researchers faking their results may be exposed only after they have racked up many publications.


The motivation exists elsewhere too. The statement “In Japan, we don’t usually doubt a professor” is also true elsewhere: I don’t usually read a paper with the assumption that the data was fabricated (maybe selected, possibly presented in a way that tells the strongest story, but not outright fake).

So, I still can’t quite put my finger on what is different about the Japanese ecosystem. Perhaps the checks and balances within departments are not as strong?

Outside Japan, a manager or department head seems to have a stronger supervisory role in my limited experience.


At what point does an author have so many retractions that their work is taken prima facie as untrustworthy?

IMO that answer should basically be two (maybe even one). And if a journal is not willing to issue a retraction on those grounds (which I would understand), it should at least flag the paper with a "retracted author" notice.

Truth is a pendulum, and we are living at the extreme end of one swing: a gilded age where "truth isn't truth". It's interesting to see how synchronized the pendulum is across very different parts of society (such as medicine and politics).


The behavior of JAMA Editor-in-Chief Howard Bauchner is appalling and frightening. If he's so unwilling to deal with fraud in his publishing, what is he letting people get away with in his position as Vice Chairman of Pediatrics at Boston University? He seems far too keen to sweep things under the rug.


How is it responsible, medically, to sit on a report of scientific fraud without action for two years, selling and profiting off bad data?


Are you being sardonic or is that a genuine question? It’s difficult to tell here since most comments are generally serious and not flippant.


Reads to me like an expression of disbelief.


I work in a completely different field and I can tell you, it's absolutely the same. I think there's a good chance that people in even higher positions know they can only keep their own positions stable by putting that kind of person in leadership roles beneath them. Or perhaps it's a survival skill of leadership itself: such people stay in power, and even rise through the ranks, precisely because they are skilled at sweeping things under the rug. Either would be a rather disappointing realization about the world, but that's what it looks like.


I don’t think it is useful to have a statement as your username.



