
Use of 'language of deceit' betrays scientific fraud - New Scientist
http://www.newscientist.com/article/dn26127-use-of-language-of-deceit-betrays-scientific-fraud.html?cmpid=RSS%7CNSNS%7C2012-GLOBAL%7Conline-news#.VAH4ccdWgR0
======
rflrob
It seems to me that every researcher will have a different baseline level of
'amplifiers' in their writing, so a classifier like this will be really hard to
train in a general way.

------
shawn-furyan
A 30% error rate is far, far too large for this to be useful as a first-line
filter. Even error rates of less than a third of that (10%) interact with low
base rates in ways that make for very unreliable tests. At best, this would be
a corroborating test, and not a very strong one at that.

~~~
rflrob
What I would have liked to see, if I were a reviewer on this paper, is a ROC
curve: basically, if you tweak the cutoffs, can you get fewer false positives
at the expense of more false negatives? Assuming fraud in science is rare (I
think even the gloom-and-doom-iest reports put it under 20% for any kind of
research misconduct, not just fraud), even a moderately low false-positive
rate will flag far more honest papers than fraudulent ones, however good the
true-positive rate is.
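To make the base-rate arithmetic concrete, here is a quick sketch. It treats the article's "70 per cent accuracy" as both sensitivity and specificity (an assumption; the article doesn't break the figure down), and the 2% fraud prevalence is purely illustrative:

```python
# Base-rate arithmetic for a fraud classifier, via Bayes' rule.
# ASSUMPTIONS: 70% sensitivity and 70% specificity (reading the article's
# "70 per cent accuracy" as both), and an illustrative 2% fraud base rate.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(paper is fraudulent | classifier flags it)."""
    true_pos = sensitivity * base_rate              # fraudulent and flagged
    false_pos = (1 - specificity) * (1 - base_rate)  # honest but flagged
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(0.70, 0.70, 0.02)
print(f"{ppv:.1%}")  # prints 4.5% -- most flagged papers would be honest
```

Under these assumptions, only about 1 in 22 flagged papers is actually fraudulent, which is the sense in which false positives swamp true positives at low base rates.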

------
lutusp
A quote: "Diederik Stapel, the infamous "lying Dutchman" who in 2011 admitted
to inventing the data in dozens of psychology research papers, unwittingly
signalled his deceit through the language he used. As well as inflating the
certainty surrounding his results, Stapel included more science-related terms
to describe his methods when writing up his fraudulent "findings" than when
describing genuine results.

"Researchers who have analysed Stapel's papers say they can separate his
genuine research from the fictional with about 70 per cent accuracy. Now they
are studying a larger sample of papers from many different scientific
fraudsters, to see if the detection method works more generally."

Just think -- if psychology were a science, researchers could use scientific
tools to detect fraud -- tests that were not conducted, lab measurements that
were fudged, images manipulated to create a false impression, as happens in
real scientific fields.

In extreme cases, we could detect fraud by simply repeating the experiments,
using the clear protocols published along with the results. But in psychology,
investigators are completely dependent on what people say, not what they do.
If Stapel were in a real scientific field, he would never have gotten away
with his many verbal frauds, and fraud detection would be child's play.

Another quote: "Stapel, who worked at Tilburg University in the Netherlands,
used more "amplifiers" – words like "profoundly" and "extreme" – in his
fraudulent papers, and fewer "diminishers" – like "merely" and "somewhat"."

Imagine a real scientist trying to defraud his university and granting
agencies by using words like "profoundly" and "extreme" about his lab results.
But in science, such descriptions would make no difference -- the lab results,
_the evidence_, would decide the issue.

It's important to understand that in psychology, because of its status as a
pseudoscience, how you describe your work is more important than the outcome
of experiments, and one can cheat with words. In a real science, Diederik
Stapel would have been exposed within days of entering a laboratory.

~~~
DanBC
> In a real science, Diederik Stapel would have been exposed within days of
> entering a laboratory.

You ignore the fact that scientific fraud exists in other sciences; and that
this article talks about extending the use of this tool to different sciences.

~~~
lutusp
> You ignore the fact that scientific fraud exists in other sciences

It's not the topic under discussion, but scientific fraud certainly exists
among scientists, with one important difference -- real scientists are exposed
by comparing their work, their evidence, to reality. Stapel was exposed by
comparing his words to his work -- work often not even conducted. There was
never any serious prospect of comparing Stapel's work to reality, because his
work didn't try to address reality -- that's science's purview.

In science, reliant on physical evidence, how you describe the evidence
shouldn't matter; the evidence should speak for itself. Reliable evidence
should bring different, similarly equipped observers to the same conclusion.
In psychology, there's no consensus on the meaning of evidence because the
evidence is extremely poor, freeing different psychologists to draw different
conclusions from the same evidence.

> this article talks about extending the use of this tool to different
> sciences.

Yes, that's true, they do talk about it. But until they actually try to apply
it, it's not a legitimate topic of discussion.

You haven't asked an obvious question -- where are the falsifiable
psychological theories, theories that can be proven false in practical tests,
but that resist falsification? Where are the time-tested theories that would
make psychology one field, like physics or biology, and that would justify
clinical practice?

