Diederik Stapel’s Audacious Academic Fraud (nytimes.com)
65 points by gruseom | 43 comments



I was going to write a response, but the following comment from the site summarizes my thoughts: "If Stapel's ambition says a lot about science as a business, then the credulity of his reviewers and colleagues speaks volumes about science as a religion. The sacred trust people place in science and the scientific method is sadly misplaced. The awe evoked by science popularizers like Carl Sagan and Neil deGrasse Tyson isn't all that different from the pomp and ceremony that surrounds the pope. And yet, it really should be the night sky that moves us, not the flattering self-portrait that science paints of ourselves and our intellects.

Simply put, science isn't all that different from other human endeavours wherein people compete for prizes. Stop putting scientists on a pedestal."


Science holds its practitioners up to standards and sometimes finds them wanting. If we didn't have scandals then that would be alarming. For many, many years there were no sex abuse scandals in the Catholic church. In retrospect, was that a good sign?

I don't see any sign of scientists being placed on pedestals in the US. What I see in the US is rampant anti-intellectualism, a cynicism about intellectual rigor, and cherry picking findings because they agree with ideological preconceptions. We're all guilty of it to some extent, but there are now a significant number of Americans who simply will not listen to well-reasoned argument.


The comment you quote moves too easily from the scientific method to scientists. Yes, scientists are human, and some of them are flawed, valuing adulation more than anything. The scientific method seems to be pretty good at flushing these people out, judging from the steady stream of stories about fraud in science.

To complete the analogy about the awe attached to some science popularizers: the Catholic Church doesn't seem quite as good at self-correction as science. In fact, I can't think of another human institution that is.


Another powerful quote (this one from the article), especially given the secular atmosphere of Europe:

'“People think of scientists as monks in a monastery looking out for the truth,” [Stapel] said. “People have lost faith in the church, but they haven’t lost faith in science. My behavior shows that science is not holy.”'


Please keep in mind the difference between science and social science.


Academic science (at least what I have seen in North American research-intensive universities, as it has changed over the past 24 years) is no longer about scientific truth. The system is constructed (partly by design and partly due to circumstance, i.e., increasingly scarce resources) such that scientific truth is no longer the goal. The goal now is self-preservation and self-promotion.

The truth today is that it is no longer good enough to "just" be a good (or even great) scientist. Now you have to sparkle. Now, in order to get postdoc funding, you have to show not just that your science is great but also how it links up with industry. Now, in order to get hired into a tenure-track stream, you not only need to have published your PhD work in prestigious journals but must also show evidence to your hiring university of your ability to attract money. Now, in order to get tenure, you need much more than "just" a solid, respected track record of doing science. You need articles in high-end journals. You need patents. You need links with industry. You need to have been featured in the popular press. You need a track record of attracting money to the university. The more of this you produce, the higher your salary and the more perks you get at your home university.

Lots of people excuse the system by saying something like: well, sure, the system has changed, boo hoo, but after you get tenure you can stop being concerned about all the BS and return to just doing good science.

The thing is, people don't do that. People build a "brand" as a professor-scientist, and as their profile gets higher, so does their salary, so do their perks.

At my university, perversely, the highest-profile scientists, with the highest salaries, (1) teach zero. I mean zero. (2) Travel away from home base 70% of the time. (3) Supervise grad students by, truthfully, farming out the supervision duties to their postdocs, who, by the way, often write the grants for the supervisor as a sort of rite of passage (or "write of passage"). (4) Are trotted out by the university anytime it needs to boast about how good it is.

The system has been twisted so that the goal is to get money and the means to that goal is to publish high-profile papers with simple stories in high-profile journals, and the means to THAT is to "do science".

The goal ought to be good science, and the means to that ought to be funding (money). It's backwards.

PS: I speak from experience. I got my PhD in 1999 and have been tenured since 2006. I'm not a high-flyer (relative to the academic celebrities in my faculty), but I am on the A team, so to speak.

PPS: I'm not excusing scientific fraud ... but it's important to understand the context in which it occurs.


This doesn't translate well to Dutch academia, where Stapel was working. Private funding is an irrelevant percentage of universities' income there, and each school, prestigious or not, gets the exact same amount of money per student. The man had tenure and kept doing it. He loved the media attention and loved appearing on TV shows to talk about every remarkable finding he published. He was more in the business of producing mass entertainment than knowledge, and simply didn't want to take the time to actually conduct the experiments his papers were supposedly based on.

To this day he continues to write new episodes of his reality TV show at a staggering pace. He has been writing books about how ashamed he is and keeps begging journalists to interview him about his downfall. I'm sure reading an article about himself in The New York Times gave him a massive hard-on.


Yeah, part of me thought, as I was reading the NYT article: why am I supporting this guy's personality disorder by reading what amounts to an extended biography of him?


One data point does not define a truth. What you describe may be true at your university, but to generalize worldwide, as your opening sentence implies, is definitely unscientific.


Thank you ... I inserted an edit to make it clear this is based on my personal observations of various North American research-intensive universities over the past 24 years or so.


You say it has changed over the past 24 years, but in what way? Was there a time when this was not the case?

Haven't scientists always been rewarded for being exceptional? And for linking up with industry and achieving pop-culture status?

What drives this change that did not drive it >24 years ago?


Yeah, this is not entirely new ... but what I have seen over the past 8 years or so is a marked amplification of this pattern. I think it has to do with the relative scarcity of resources (i.e., money and positions) now compared to when I started as a graduate student.


I think what you're saying is right, but given the amount of fraud there has been in science in the past, I would have a hard time believing this is a new trend. There have always been people willing to do anything for fame/power/money. Science isn't immune to having such people.


I am not too worried about Stapel. He is an outlier, and like all outliers he is getting his share of attention in the media.

I am in general much more worried about the dubious statistics and protocols that are deployed on real data: the torturing of the data, shoddy experiment design, the unpublished negative results, and the somewhat sobering realization that research is actually hard. Ioannidis formulates this quite elegantly in "Why Most Published Research Findings Are False" [1].
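
To make Ioannidis's point concrete, here is a back-of-the-envelope sketch in Python with assumed numbers (the 10% prior, 80% power, and 0.05 alpha are illustrative choices of mine, not figures from the paper):

    # Assumed numbers, for illustration only: suppose 10% of tested
    # hypotheses are actually true, studies have 80% power, and the
    # significance threshold is alpha = 0.05.
    prior, power, alpha = 0.10, 0.80, 0.05

    true_positives = prior * power           # real effects detected
    false_positives = (1 - prior) * alpha    # null effects passing the test

    # Fraction of "significant" findings that are actually false:
    fdr = false_positives / (false_positives + true_positives)
    print(f"false findings among positives: {fdr:.0%}")  # ~36%

Lower power or any bias (p-hacking, selective reporting) pushes that fraction past 50%, which is the paper's headline claim.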

One thing that is painfully obvious is that this whole affair was only possible because people do not, and are not obliged to, share their data and code.

[1]: http://www.plosmedicine.org/article/info:doi/10.1371/journal...


I'm all for sharing data and code, but don't you think that someone as determined to cheat as Stapel would simply have been a bit more careful in preparing his spreadsheet? Publishing carefully crafted fake data does not help; only replicating the experiment would, and you can't do that for every publication. You could also demand that some kind of notary be present while the actual experiment is performed, but ultimately you need a large amount of trust in science.


This is a very important idea that is fortunately gaining some traction in academic science at the moment ... namely, the need to share data and code. Some journals even require it (sharing data, that is; I'm thinking of brain-imaging journals).

My own opinion is that every academic scientist ought to, as a matter of course, make their data and code publicly available.


> The experiment — and others like it — didn’t give Stapel the desired results, he said. He had the choice of abandoning the work or redoing the experiment. But he had already spent a lot of time on the research and was convinced his hypothesis was valid. “I said — you know what, I am going to create the data set,” he told me.

> Sitting at his kitchen table in Groningen, he began typing numbers into his laptop that would give him the outcome he wanted. He knew that the effect he was looking for had to be small in order to be believable; even the most successful psychology experiments rarely yield significant results. The math had to be done in reverse order: the individual attractiveness scores that subjects gave themselves on a 0-7 scale needed to be such that Stapel would get a small but significant difference in the average scores for each of the two conditions he was comparing. He made up individual scores like 4, 5, 3, 3 for subjects who were shown the attractive face. “I tried to make it random, which of course was very hard to do,” Stapel told me.

This sort of misconduct is shockingly common in academia, such that it is often not even seen as misconduct.
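
For intuition about why the fabricated effect had to be small yet still "significant," here is a minimal sketch with invented summary numbers (these are not Stapel's actual data; the means, spreads, and group sizes are made up for illustration):

    import math

    # Hypothetical summary statistics on a 0-7 self-rated
    # attractiveness scale, invented for illustration.
    n_a, mean_a, sd_a = 25, 3.6, 1.2   # shown an attractive face
    n_b, mean_b, sd_b = 25, 4.4, 1.2   # control condition

    # Pooled two-sample t statistic
    sp = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2))
    t = (mean_b - mean_a) / (sp * math.sqrt(1 / n_a + 1 / n_b))
    print(f"t({n_a + n_b - 2}) = {t:.2f}")  # t(48) = 2.36, two-tailed p ~ .02

A difference of less than one scale point is enough to clear p < .05 at this sample size; a bigger, flashier effect would have looked implausible to reviewers.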


Really? The other elements (stopping once your data fit your hypothesis, ignoring contradicting data in analysis) are fairly common, but does anyone really think that making up data isn't misconduct?


As someone who works in science, I've never ever ever met someone who confessed to making up data (nor have I made up data myself).

Generating data and passing it off as measured data = fraud = loss of contract.


Contract? Are we talking about business or science?

Now I am really worried. Is the goal to produce "tangible results" (papers, press, professional acclaim, popular fame), or to try, and maybe even probably fail, to uncover something new but true?

Is everything corrupt?


"Contract? Are we taking about business or science?"

Are you living under a rock? How do you think research is funded? The people actually doing research are funded through short-term contracts, with "tenure" dangled in front of them as the proverbial carrot. However, 1) most people starting out on this track never get tenure; and 2) those who do mostly give up the nitty-gritty of research, for a combination of reasons.

Sometimes I find the apparent lack of understanding of how academia actually works (as opposed to how undergrads, and the general public with a university degree but no real exposure to the behind-the-scenes of academic research, think it is, or feel it should be) just as jarring as the way things go at universities.


Yes, I must have been living under a rock while academic research (we are not talking about corporate R&D) became completely dominated by the pursuit of funding. Yes, I've heard that grant writing has become a key "research" skill. And I know about all the academic scandals, the frequent inability to reproduce published results, the bias against publishing negative results, and the general abuse of statistics.

However as a non-academic science lover I still held on to the ideal that science was mostly about the pursuit of knowledge. I certainly got that impression from the few great researchers in medicine, economics, physics and math I happened to get to know. I guess they were just exceptions.

Thanks for opening my eyes. Science is a business. And academic research is just as corrupt as every other human activity. Why should I ever have thought anything else?


Look man, I don't mean to rag on you. It's just: how do people think academics make a living? They're normal people just like the rest of us, with grocery bills and mortgages, and daydreams of having enough time to watch their kids grow up. Money is tight everywhere, and was before the crisis, because so little research can be shown to have actual uses. Starry-eyed grad students flock to academia hoping not to have to think about such mundane things as *gasp* money, full of youthful idealism about Advancing Human Knowledge(tm), only to find out that their contributions, in all likelihood, will not matter at all, and, after 10 years at their university, to come to terms with the fact that they'd rather find other ways to give meaning to their lives than waste them on more of the same publish-or-perish grind.

Obviously this is exaggerating things: some research is genuinely useful, and overall the state of the art advances in all fields, even if in very small increments; and very few academics (at least of the ones I know, which is a considerable number) drag themselves to their desks every day thinking about shooting themselves in the head. But the core point, that academia isn't the romantic ideal some have of it, is undeniable, and won't be denied by anyone living the life (because that's what it is: a lifestyle, with advantages and drawbacks like any other).


When you work in science as a post-doc or research assistant, you (usually) have a one- or two-year contract; tenure is just for a select few professors. If you cheat, that's the end of your contract. Where do you see corruption in this?


Can anyone shed light on how a paper gets published? I am guessing the process requires some higher authority to review the paper and check its legitimacy. If so, he committed fraud in 55 of his papers, not 1 or 2! So what irked me while I was reading was: how was the fraud missed so many times?

Someone else said that this is common in academia...if so, something HAS to be done to stop it. That's why I would like to know the process behind a paper being successfully published--it seems like a broken system if fraud like this occurs remotely often.


Typically papers are reviewed by anonymous peers (other researchers in the field) and the journal's editors. This is why the fraud review board chastised the field of Psychology as a whole.

Note that it would have been nearly impossible to detect fraud in a single, isolated paper. His data were fabricated in such a way as to seem believable, and it's not as if researchers are expected to record video evidence of their experiments actually taking place; they just report the results. He was ultimately revealed by people familiar with his work noticing patterns that appeared across many papers. I don't think there is any formal mechanism in place to review entire bodies of work as a whole.


Missing raw data should surely be a big red flag; maybe not video, but surely you're expected to include the individual response numbers so that other people can check your analysis, and it sounds like he claimed not to even have those.


Usually these kinds of data are not part of the publication or up for review. I don't know exactly why, but I suppose it is a combination of:

1) Paper-publishing tradition, where distributing this material is too much work.

2) Too much work for reviewers to comb through raw data. The real solution would be to require sharing of the raw data, so that readers rather than reviewers would find mistakes, but this is often difficult because of privacy issues. E.g., videos of the Utrecht train station study would make fraud more difficult, but publishing the videos would probably require permission from all participants.

3) Trust in the authors' integrity. Most scientists would not make up data, but they might make logical errors in their reasoning or use bad procedures. Peer review can find these errors, but if the authors lie about their procedures, it is very hard to check. Peer review is about internal consistency.


Psychology being an inexact science, it is difficult for someone else to repeat the same experiment and check the validity of the results (as one can in the physical sciences). Even if someone does get different numbers, the validity of the repeat experiment itself can be questioned, or the difference can be put down to any number of variables, e.g. the human subjects themselves will not be the same.


The much bigger problem is that nobody tries to repeat findings, because there is no incentive to do so. There is an asymmetry between positive and negative findings. Say I publish a study showing that X causes Y. Then someone else tries to replicate it and finds no statistically significant evidence that X causes Y. Now, in medicine this may be a result in itself, if "X causes Y" is a well-established result. But in many other fields, such as computer science, a negative finding is usually assumed to be a mistake by the researcher (a bug in the code, etc.), so you have to work really hard to prove that there is no relation between X and Y, and it is hard to publish these kinds of results unless you can somehow rigorously prove the opposite result.
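
As a quick sketch of why a single failed replication is weak evidence on its own, the simulation below assumes a real but small effect (d = 0.3) and a replication with 30 subjects per group; both numbers are invented for illustration:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    d, n, trials = 0.3, 30, 10_000   # assumed effect size and group size
    failures = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n)  # control group
        b = rng.normal(d, 1.0, n)    # treatment group; the effect is real
        _, p = stats.ttest_ind(a, b)
        if p >= 0.05:
            failures += 1

    # Roughly 80% of replications come back non-significant here,
    # even though the effect genuinely exists.
    print(f"non-significant replications: {failures / trials:.0%}")

At this power, "we failed to replicate" says almost nothing by itself, which is why proving the absence of an effect takes so much more work than claiming its presence.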


> it seems like a broken system if fraud like this occurs remotely often

That's the problem. It is indeed broken, and not just in terms of fraud but in more ways than one; yet the people who "game the system" call the shots and have no interest in fixing it.


Papers are generally peer-reviewed. However, this nitwit was just outright falsifying his data - peer-review or a "higher authority" review wouldn't necessarily have caught cooked data. Open data policies may help.


Even so, it would just have meant more work for him to fake more convincing raw data. Based on the article, I don't get the sense this would have dissuaded him.


This article inspired me to write an essay about my disillusionment with academia after completing a PhD. I've posted it at http://mytimeinacademia.pen.io

HN comment thread: https://news.ycombinator.com/item?id=5624903 - I'd be grateful for your feedback


In situations like this, people should take an empathetic stance. Sure, what Stapel did was wrong, and he deserves no sympathy, but everyone deserves empathy, even criminals.

I'm not one who "blames the system" for malicious actions of individuals, as for anything to make sense, we must have free will. The system is not at fault, but it may be poorly constructed in such a way that individuals prone to abuse it can easily do so.

Regardless of whether I keep my front door unlocked, if I'm robbed, the robber deserves legal punishment. But this doesn't mean I should keep the door unlocked.

Ultimately, Stapel has paid for his actions. He's completely discredited as a scientist and researcher, and will never regain his reputation as an academic. But that's not the important question.

What we must ask is why and how this got so far. How can we change the system so people like Stapel will not be able to abuse it?


CICLing, the Computational Linguistics and Intelligent Text Processing (natural language processing) conference [0], uses an open data/software method to eliminate such frauds. They have a "Verifiability, reproducibility, and working description" policy, which reads as follows:

Starting from 2011, CICLing implements a policy of giving preference to papers with verifiable and reproducible results:

If the authors claim to have obtained some results, we encourage them to make all the input data necessary to verify and reproduce the results available to the community.

If the authors claim to advance human knowledge by introducing an algorithm, we encourage them to make the algorithm itself -- and not only its (usually vague and incomplete) description -- available to the public. We do not require any demo or tool based on your paper, but instead a form of proof and working description of the algorithm in addition to the verbal description given in your paper. An approximation of the idea is the code submitted with Church & Umemura's paper to be permanently hosted at CICLing servers, and cited in the paper (see last line): you see, we don't mean anything complicated. Obviously, you are also encouraged to show demo programs or tools based on your method, either as part of your talk or (better) at the demo session, and we will also be happy to host on our servers such software that complements your paper. However, this is not required. In contrast, we do believe that a publicly published scientific paper must be accompanied by a minimal working description of the algorithm, open-source and available to the community.

Also, we do not ask for impossible: if you present a large system, especially commercially distributed or a property of your company, then we do not expect you to provide the system. Our point is that when the software and data can be provided, it should be provided.

We do not yet have specific rules: we hope to elaborate the rules basing on this year's experience, so please use common sense. See the problems this policy is to address, as well as the list of software reviewing committee and instructions for the reviewers.

[0] http://www.cicling.org/2013/


Right, and it is perfectly useless for preventing what Stapel did. "We encourage them to make <etc>"? Instead of spending 5 days fabricating coherent data, you spend 6: an additional day to come up with a plausible reason for not having to divulge raw data. I'm not a proponent of requiring everybody to submit their lab notes with every paper submission; in fact, I have no credible solution to the whole problem at all. But feel-good policies like this are no more than something for a journal's editors to point at later, when fraud is found, so they can say "yeah, but we did something" (i.e., the classic politician's fallacy).


> Right, and it is perfectly useless for preventing what Stapel did.

Nothing of that sort is claimed; just because you do not have a foolproof method doesn't mean you shouldn't take limited steps to stop it. The least you can do is put hygiene checks in place. Most reputed conferences wouldn't even do that. Also, how and why is this a feel-good policy? The program committee of the conference is reputed enough to check for fraud/discrepancies. After all, the Stapel fraud (like many others) was detected by his peers. FYI, this is not a journal either.


You used the phrase "to eliminate such frauds." The suggestion (to me) is that this is a foolproof method that eliminates such frauds. I would contend this is not true; fraud is still relatively easy. The question is how much this promotes a false sense of security.

Stapel's fraud was only detected after many years, and after his results had been widely published. I still hear some of his conclusions quoted now and then as if it were established truth, by people who have not yet realised that it is one of the famous Stapel ones.


Some clarifications:

+ No method/suggestion is foolproof in the world of publications; it simply can't be.

+ Stapel's fraud was "data fraud," which is eliminated if you work on publicly available datasets.

+ The NLP community encourages work on public data, and you can't get Stapel's level of recognition if you do not have your data/tools out in the open.

+ Any suggestion/method that is not foolproof can be said to promote a false sense of security; it is important to start somewhere rather than maintain the status quo.


No doubt many scientists are ethical and do good work. Others may produce reports that are either not honest or not accurate.

I wonder how we could encourage honesty and accuracy - perhaps by getting funding agencies to increase the rewards for people who show others' work to have been deeply inaccurate, or worse, fraudulently wrong.

Consider: what were the global costs of Reinhart & Rogoff's error? How, and how much, will we reward Herndon, Ash, and Pollin for their diligence in uncovering that inaccuracy?

What would the long-term effects on science be if the revelation and correction of such inaccuracies were better rewarded, and the perpetration of any scientific fraud involved more effective penalties?


Ah, a way to encourage more academic infighting. What could go wrong! :P

I don't think an incentive like this would work very well. What you need is more people attempting to reproduce work, which means funding the attempt, not the result.


Less would be wrong. Plenty could still go wrong.

Consider whistle-blowing by grad students & post-docs - should that be rewarded as increasing accuracy, or punished as embarrassing to an institution?

Also, I mean science to include the (often better funded) areas beyond academia - think medicine and health care.

"funding the attempt, not the result"

Possible outcomes:

1) Reproduces as expected (all real, or all faked?)

2) Reproduction fails (failed attempt, or not reproducible?)

3) Exposes fakery in previous experiments.

We should fund all of those, but reward 3) better afterward.



