
Author on leave after Harvard inquiry - nice1
http://www.boston.com/news/education/higher/articles/2010/08/10/author_on_leave_after_harvard_inquiry/?p1=News_links
======
jacquesm
Wow, that's really quite a height to fall from.

I was surprised to learn that the peer review process did not include giving
the reviewers access to the raw data. Is that the way it is normally done?
What is the reason for that?

~~~
hga
Are you referring to this bit?

" _Gary Marcus, a psychology professor at New York University and one of the
co-authors of the paper, said he drafted the introduction and conclusions of
the paper, based on data that Hauser collected and analyzed._ "

As far as I can tell from just this article, the raw data is available (he
supplied it to a skeptic for a disputed study in the '90s); it just wasn't
shared with this co-author, no doubt for division-of-labor reasons.

It should also be noted that raw data is generally quite messy in format (lab
notebooks, recorder paper strips and the output of other machines, in this
case a set of videos, etc.). Releasing a copy, especially in a form that will
make sense to others (e.g. adding annotations or explaining your labeling
system), takes time and money you'd rather spend on doing more science, so
you only do it when necessary.

Climategate certainly told us that sharing raw data with peer reviewers or
skeptics is anything but routine or normal (they weren't called on the
carpet for merely refusing to do that); you might recall that one of the
things that caused it all to fall apart was when a member of the clique
published a paper in a journal they didn't realize had a "supply your raw
data" requirement....

Raw data can most certainly be tampered with, see
<http://en.wikipedia.org/wiki/Thereza_Imanishi-Kari>, a case I know an awful
lot about due to overlapping social circles plus incidental contact with NIH's
"fraud busters" in the mid-90s. Here the Secret Service "inks and paper" types
were called in and found that some raw lab-notebook data had been falsified
after the fact, or at least this was reported in _Science_ in quite a bit of
detail.

The important real test is reproducibility. That's why there can be a lot of
fraudulent science without it hurting the general scientific endeavor: if it's
unimportant, no one will depend on it, and it still gets the "publish or
perish" job done. If it's important, people will try to build upon it; when
that doesn't work (that's where Imanishi-Kari lost), they'll try to reproduce
your work to see how they're screwing up, and the fraud/mistake/whatever will
get caught.

I am a bit surprised at how terrifically opaque Harvard's handling of this is.
As the journal editor said:

" _[...] he had not been told what specific errors had been made in the paper,
which is unusual. “Generally when a manuscript is withdrawn, in my experience
at any rate, we know a little more background than is actually published in
the retraction,’’ he said. “The data not supporting the findings is
ambiguous.’’_ "

No kidding. This is not a satisfactory state of affairs, as is pointed out
elsewhere.

