Simply put, science isn't all that different from other human endeavours wherein people compete for prizes. Stop putting scientists on a pedestal.
I don't see any sign of scientists being placed on pedestals in the US. What I see in the US is rampant anti-intellectualism, cynicism about intellectual rigor, and the cherry-picking of findings that agree with ideological preconceptions. We're all guilty of it to some extent, but there are now a significant number of Americans who simply will not listen to a well-reasoned argument.
To complete the analogy about the awe attached to some science popularizers: the Catholic Church doesn't seem quite as good at self-correction as science is. In fact, I can't think of another human institution that is.
'“People think of scientists as monks in a monastery looking out for the truth,” [Stapel] said. “People have lost faith in the church, but they haven’t lost faith in science. My behavior shows that science is not holy.”'
Lots of people excuse the system by saying something like: well, sure, the system has changed, boo hoo, but after you get tenure you can stop being concerned about all the BS and return to just doing good science.
The thing is, people don't do that. People build a "brand" as a professor-scientist, and as their profile gets higher, so does their salary, so do their perks.
At my university, perversely, the highest-profile scientists, with the highest salaries, (1) teach zero, and I mean zero; (2) travel away from home base 70% of the time; (3) supervise grad students by, truthfully, farming the supervision duties out to their postdocs, who, by the way, often write the grants for the supervisor as a sort of write-of-passage (sic); and (4) are trotted out by the university any time it needs to boast about how good it is.
The system has been twisted so that the goal is to get money and the means to that goal is to publish high-profile papers with simple stories in high-profile journals, and the means to THAT is to "do science".
The goal ought to be good science, and the means to that ought to be funding (money). It's backwards.
PS: I speak from experience. I got my PhD in 1999 and I've been tenured since 2006. I'm not a high-flyer (relative to the academic celebrities in my faculty), but I am on the A team, so to speak.
PPS: I'm not excusing scientific fraud ... but it's important to understand the context in which it occurs.
To this day he continues to write more episodes of his reality TV show at a staggering pace. He has been writing books about how ashamed he is and keeps begging journalists to interview him about his downfall. I'm sure reading an article about himself in the New York Times gave him a massive hardon.
Haven't scientists always been rewarded for being exceptional? Also for linking to industries and achieving pop-culture status?
What is driving this change now that did not drive it >24 years ago?
I am in general much more worried about the dubious statistics and protocols that are deployed on real data: the torturing of the data, shoddy experiment design, unpublished negative results, and the somewhat sobering realization that research is actually hard. Ioannidis formulates this quite elegantly in "Why Most Published Research Findings Are False".
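As a rough illustration of the arithmetic behind that paper (my own toy simulation, not Ioannidis's model): if enough true-null experiments are run, a predictable fraction will come out "significant" purely by chance, and those are the ones that tend to get written up.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_experiments, false_positives = 1000, 0

    # 1000 experiments in which the null hypothesis is TRUE:
    # both groups are drawn from the same distribution.
    for _ in range(n_experiments):
        a = rng.normal(0.0, 1.0, size=20)
        b = rng.normal(0.0, 1.0, size=20)
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            false_positives += 1

    # Roughly 5% reach p < 0.05 despite there being no effect at all.
    print(f"{false_positives}/{n_experiments} null experiments were 'significant'")

Add selective reporting and flexible analysis on top of that base rate and the published record skews even further.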
One thing that is painfully obvious is that this whole affair was only possible because people do not, and are not obliged to, share their data and code.
My own opinion is that every academic scientist ought to, as a matter of course, make their data and code publicly available.
> Sitting at his kitchen table in Groningen, he began typing numbers into his laptop that would give him the outcome he wanted. He knew that the effect he was looking for had to be small in order to be believable; even the most successful psychology experiments rarely yield significant results. The math had to be done in reverse order: the individual attractiveness scores that subjects gave themselves on a 0-7 scale needed to be such that Stapel would get a small but significant difference in the average scores for each of the two conditions he was comparing. He made up individual scores like 4, 5, 3, 3 for subjects who were shown the attractive face. “I tried to make it random, which of course was very hard to do,” Stapel told me.
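To make the "math in reverse order" concrete, here is a minimal sketch of the fabrication process as the article describes it; the specific numbers, ranges, and thresholds are mine, not Stapel's:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Fabricate 0-7 self-rated attractiveness scores for two conditions,
    # regenerating until the difference is small but "significant" --
    # the reverse of honest analysis, where the data come first.
    while True:
        shown_attractive   = rng.integers(2, 8, size=30)
        shown_unattractive = rng.integers(1, 7, size=30)  # nudged lower
        t, p = stats.ttest_ind(shown_attractive, shown_unattractive)
        diff = shown_attractive.mean() - shown_unattractive.mean()
        if p < 0.05 and 0 < diff < 1.0:  # small, "believable" effect
            break

    print(f"means: {shown_attractive.mean():.2f} vs "
          f"{shown_unattractive.mean():.2f}, p = {p:.3f}")

Note that a generator makes the fakery look random for free; Stapel's remark that doing it by hand "was very hard to do" is exactly why hand-typed numbers eventually betrayed him.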
This sort of misconduct is shockingly common in academia, such that it is often not even seen as misconduct.
Generating data and passing it off as measured data = fraud = loss of contract.
Now I am really worried. Is the goal to produce "tangible results" (papers, press, professional acclaim, popular fame), or to try, and maybe even probably fail, to uncover something new but true?
Is everything corrupt?
Are you living under a rock? How do you think research is funded? The people actually doing research are funded through short-term contracts, with tenure being dangled in front of them as the proverbial carrot. However, 1) most people starting out on this track never get tenure; and 2) those who do mostly give up the nitty-gritty of research, for a combination of reasons.
Sometimes I find the apparent lack of understanding of how academia actually works (as opposed to how undergrads, and the general public with a university degree but no real exposure to the behind-the-curtains side of academic research, think it works or feel it should work) just as jarring as the way things go at universities.
However, as a non-academic science lover, I still held on to the ideal that science was mostly about the pursuit of knowledge. I certainly got that impression from the few great researchers in medicine, economics, physics and math I happened to get to know. I guess they were just exceptions.
Thanks for opening my eyes. Science is a business. And academic research is just as corrupt as every other human activity. Why should I ever have thought anything else?
Obviously this is exaggerating things: some research is genuinely useful, and overall the state of the art advances in all fields, even if in very small increments; and very few academics (at least of the ones I know, which is a considerable number) drag themselves to their desks every day thinking about shooting themselves in the head. But the core point, that academia isn't the romantic ideal some have of it, is undeniable, and won't be denied by anyone living the life (because that's what it is: a lifestyle, with advantages and drawbacks like any other).
Someone else said that this is common in academia... if so, something HAS to be done to stop it. That's why I would like to know the process behind a paper being successfully published -- it seems like a broken system if fraud like this occurs even remotely often.
Note that it would have been nearly impossible to detect the fraud in a single, isolated paper. His data were fabricated in such a way as to seem believable, and it's not like researchers are expected to record video evidence of their experiments actually taking place; they just report the results. He was ultimately exposed by people familiar with his work who noticed patterns appearing across many papers. I don't think there is any formal mechanism in place to review entire bodies of work as a whole.
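For what it's worth, some of those body-of-work patterns are statistical, and checking for them can be mechanical. A toy example (with invented numbers of my own, and not the actual analysis used in the Stapel case): people making up data tend to favor certain terminal digits, while honest measurements keep them roughly uniform.

    import numpy as np
    from scipy import stats

    # Last-digit check across many reported statistics. These invented
    # values are suspiciously fond of ".x7" endings, mimicking a human
    # typing in "random-looking" numbers.
    reported = np.array([4.23, 5.17, 3.37, 4.57, 5.27, 3.77, 4.17, 5.37,
                         3.57, 4.77, 5.57, 3.17, 4.37, 5.77, 3.27, 4.47])
    last_digits = (reported * 100).round().astype(int) % 10

    counts = np.bincount(last_digits, minlength=10)
    chi2, p = stats.chisquare(counts)  # null: digits 0-9 equally likely
    print(f"digit counts: {counts}, chi-square p = {p:.2e}")
    # A tiny p, recurring across many papers, is exactly the kind of
    # signal that no single paper would reveal on its own.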
That's the problem. It is indeed broken, and not just in terms of fraud but in more ways than one; yet the people who "game the system" call the shots and have no interest in fixing it.
HN comment thread: https://news.ycombinator.com/item?id=5624903 - I'd be grateful for your feedback
I'm not one to "blame the system" for the malicious actions of individuals; for anything to make sense, we must have free will. The system is not at fault, but it may be poorly constructed, in such a way that individuals prone to abusing it can easily do so.
If I keep my front door unlocked and I'm robbed, the robber still deserves legal punishment. But that doesn't mean I should keep the door unlocked.
Ultimately, Stapel has paid for his actions. He's completely discredited as a scientist and researcher, and will never regain his reputation as an academic. But that's not the important question.
What we must ask is why and how this got so far. How can we change the system so people like Stapel will not be able to abuse it?
Starting in 2011, CICLing has implemented a policy of giving preference to papers with verifiable and reproducible results:
If the authors claim to have obtained some results, we encourage them to make all the input data necessary to verify and reproduce the results available to the community.
If the authors claim to advance human knowledge by introducing an algorithm, we encourage them to make the algorithm itself -- and not only its (usually vague and incomplete) description -- available to the public.
We do not require a demo or tool based on your paper, but rather a form of proof and a working description of the algorithm, in addition to the verbal description given in your paper. An approximation of the idea is the code submitted with Church & Umemura's paper, permanently hosted on CICLing's servers and cited in the paper (see its last line): as you can see, we don't mean anything complicated. You are also encouraged to show demo programs or tools based on your method, either as part of your talk or (better) at the demo session, and we will be happy to host on our servers software that complements your paper; however, this is not required. In contrast, we do believe that a publicly published scientific paper must be accompanied by a minimal working description of the algorithm, open-source and available to the community.
Also, we do not ask for the impossible: if you present a large system, especially one that is commercially distributed or the property of your company, then we do not expect you to provide the system. Our point is that when the software and data can be provided, they should be provided.
We do not yet have specific rules: we hope to elaborate them based on this year's experience, so please use common sense. See the problems this policy is meant to address, as well as the list of the software reviewing committee and the instructions for the reviewers.
Nothing of that sort is claimed; the lack of a foolproof method doesn't mean you shouldn't take even limited steps to stop fraud. The least you can do is put hygiene checks in place; most reputed conferences wouldn't even do that. Also, how and why is this a feel-good policy? The program committee of the conference is reputable enough to check for fraud/discrepancies. After all, the Stapel fraud (like many others) was detected by his peers. FYI, this is not a journal either.
Stapel's fraud was only detected after many years, and after his results had been widely published. I still hear some of his conclusions quoted now and then as if it were established truth, by people who have not yet realised that it is one of the famous Stapel ones.
+ No method/suggestion is foolproof in the world of publications; it simply can't be.
+ Stapel's fraud was "data fraud", which is eliminated if you work on publicly available datasets.
+ The NLP community encourages work on public data, and you can't get Stapel's level of recognition if you do not have your data/tools out in the open.
+ Any suggestion/method that is not foolproof can be said to promote a false sense of security; but it is important to start somewhere rather than maintain the status quo.
I wonder how we could encourage honesty and accuracy - perhaps by getting funding agencies to increase the rewards for people who show others' work to have been deeply inaccurate, or worse, fraudulently wrong.

Consider: what were the global costs of Reinhart & Rogoff's error? How - and how much - will we reward Herndon, Ash, and Pollin for their diligence in uncovering that inaccuracy?

What would the long-term effects on science be if the revelation and correction of such inaccuracies were better rewarded, and the perpetration of scientific fraud carried more effective penalties?
I don't think an incentive like this would work very well. What you need is more people attempting to reproduce work, which means funding the attempt, not the result.
Consider whistle-blowing by grad students & post-docs - should that be rewarded as increasing accuracy, or punished as embarrassing to an institution?

Also, I mean science to include the (often better funded) areas beyond academia - think medicine and health care.
> funding the attempt, not the result.
A funded replication attempt has three possible outcomes:

1) It reproduces as expected (all real, or all faked?).
2) The reproduction fails (a failed attempt, or not reproducible?).
3) It exposes fakery in the previous experiments.

We should fund all of those, but reward 3) better afterward.