
"The thing about the Schon case is this: he might have got away with all..."

But could Schon have gotten away with it? I somehow doubt it. I was going to mention the Schon case and you beat me to it. I recall that when news of the case broke it led me to read some of his papers that contained the fabricated data. I've never fully understood why Schon faked the data or what he stood to gain by doing so.

First, I've not read Samuel Reich's account of the Schon case you mention; in fact, I didn't know it existed or I definitely would have done so. It's now on me to follow that up.

It seems to me there are multiple reasons why a researcher would fabricate data or commit scientific fraud, including those you've mentioned. However, the problem I have is that I find it almost inconceivable that anyone who commits such fraud would actually believe that they could get away with it and not get caught.

If the perpetrator simply believed that the fraud would never be discovered during his/her career or lifetime - or perhaps never even after death, and thus there'd never be a stain on his/her reputation - then I'd reckon we ought to question how he/she became a researcher in the first instance. Such woolly thinking doesn't smack of much intelligence.

Surely, every researcher knows that even if he/she managed to fool the peer review process with fabricated data (which is quite possible with new research that's not well understood), the situation wouldn't remain so forever. Once published in widely circulated journals, this fabricated data would live on for centuries and the fact that it's fraudulent would eventually be discovered. Moreover, a researcher who intends to fabricate data would also know (or ought to know) that once errors in scientific papers are discovered it's usually not hard to distinguish between fabricated data and actual errors, mistakes, etc.

It's a long while since I've read some of Schon's papers but I recall that at the time it was clear - even to me, who hadn't worked as a researcher in that field - that his results were just a little too clean and neat. Sure, by then I had the hindsight of knowing that parts of those papers contained fabricated data, but even so it was pretty obvious. Thus, to my mind, the question remains why he undertook such risks with his career. (Perhaps Samuel Reich's book answers that, so I'm looking forward to reading it.)

Incidentally, the matter of scientific fraud has never been far from my mind since a somewhat trivial event occurred to me during my student days - although it wasn't trivial at the time (I've recounted this previously). At the end of a chemistry lab experiment the tutor accused me of fabricating the results of a titration and of adapting my work directly from the textbook, as he reckoned my results were too good to be true. I was absolutely furious and insisted that he stay back with me in the lab over lunchtime whilst I repeated the titration, which he did. The result was that several of the measurements were even closer to the theoretical values than in the earlier experiment.

That incident led me not only to question and be suspicious of my own data and experimental setups but also those of others. This brings me to the matter of the replication crisis, but I won't elaborate on that further here except to say that Schon must have been worried that others wouldn't be able to replicate his work - unless, as you've mentioned, he was pretty certain they would be able to, which would explain his motive.




> Sure, by then I had the hindsight of knowing that parts of those papers contained fabricated data but even so it was pretty obvious.

But this IS the crux of the problem. Chess has a similar situation--the forced checkmate is obvious after the fact. However, even chess masters miss forced checkmates at the board all the time. And that situation is far less subtle than research data.

> I was absolutely furious and I insisted that he stay back with me in the lab over lunchtime whilst I repeated the titration which he did.

I applaud him, but that's not good lab technique. Good lab technique is "Here's my lab notebook. You can see my data along with the handwritten timestamps. The procedure is also written in there. YOU will replicate this while I watch and correct YOUR lab technique." THAT'S good lab technique.

I had a physics lab as a freshman taught by an absolute tyrant of a professor. He would pull one notebook every lab and try to replicate your results. If he couldn't, you failed that lab. You had to record EVERYTHING in your lab notebook. Serial numbers of equipment, test and calibration procedure steps, data that you knew was invalid and record the reason why ... all in pen with timestamps with no erasures ever allowed.

It was brutal.

He was totally unapologetic. His stance: "If you can't get the same result three times, you don't even know that you're wrong let alone that you're right. I'm actually being generous in the lab--you only have to get the same result twice."


"I applaud him, but that's not good lab technique. Good lab technique is "Here's my lab notebook. You can see my data along with the handwritten timestamps."

I don't think he had much choice other than to stay back, because his comment was made at the end of the morning's lab session, which ended at lunchtime. As was his wont, he walked around the lab near the end of the session looking at students' results.

It would not have been easy to 'prefabricate' results ahead of time, for two reasons. First, whilst the type of experiment was known to students ahead of time, the exact details such as the quantity of reagents etc. were not. Second, lab work notes required not only the time and date but also the lab temperature, atmospheric pressure and humidity to be entered at the beginning of the work (and occasionally these were relevant and had to be taken into account). Students acquired this info from instruments - barometer etc. - on the lab wall near the entrance (we usually wasted about 5 mins crowded around whilst each student recorded the details separately). BTW, these were analog instruments, wet and dry bulb etc., thus the measurements consumed noticeable time.

For brevity's sake, what I didn't mention was that there were two of us involved in the experiment; thus, by necessity, we both had to have the same results. As was the practice, students did lab work in pairs (each bench had room for two students), so both of us were implicated (although I was the one he addressed). The titration was only part of a larger experiment. What he implied was that the data points on our hand-drawn graphs (done during the lab work), which showed various stages of the titration, didn't come from the experiment itself but had been inferred (interpolated) from textbook info that we'd have known ahead of time. In theory, he was of course correct: knowing the textbook theory, it wouldn't have been that difficult to interpolate the results on-the-fly. As two of us were involved, he was also implying collusion (although he didn't say so).

All I can say is that his mind was more devious than ours. For starters, neither of us was that organized, and consulting the list of each session's lab work that had been issued months before wasn't something I did (again, such dedication eluded me—too many other distractions). The other point is that whilst accuracy in one's lab work was important, it's not as if we were awarded marks on accuracy; what mattered was actual attendance at the lab and successful completion of the experiment—so planning ahead as he implied would have been the last thing on our minds.

"You had to record EVERYTHING in your lab notebook. Serial numbers of equipment, test and calibration procedure steps, data that you knew was invalid and record the reason why ... all in pen with timestamps with no erasures ever allowed."

Right, there's nothing wrong with this, and getting students into the habit of calibrating their setups/experimental equipment is absolutely important. In fact, for me this started in science from the first year of high school and followed on at university. Calibration is still ingrained in my mind (later in life I was involved in standards work, so such thinking comes naturally). That said, the way to police lab work is for the instructor to walk around during the experiments and ensure that such recordings are done on the spot. As mentioned, when I was doing lab practicals the emphasis was on a successful outcome on the day—results later on weren't that important; actually learning to do it properly was what mattered.

""If you can't get the same result three times, you don't even know that you're wrong let alone that you're right.""

There's a lot to that which makes sense. However, what we were taught was that you'd be unlikely to get exactly the same result each time because of experimental error. Getting an identical result wasn't as important as checking one's technique each time and then correlating each set of results thereafter. In fact, we were warned to be suspicious if we did get identical results—some equipment, setup etc. may not have been properly reset from the previous run of experiments. What was more important, beginning even in the first year of high school science, was to process one's results statistically. That meant summing residuals etc. using stats—root-n-type stuff (uncertainties shrinking as 1/√n). Various statistical techniques were applied in both physics and chemistry, right through my training.
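To make the root-n point concrete, here's a minimal sketch in Python (with made-up titration volumes, purely for illustration and not from any real experiment) of the kind of processing we were taught: take the residuals about the mean, compute the sample standard deviation, and report the standard error of the mean, which shrinks as 1/√n:

    import math

    # Hypothetical titration endpoint volumes in mL from repeated runs
    # (illustrative numbers only, not real data).
    volumes = [24.95, 25.05, 25.00, 24.90, 25.10]

    n = len(volumes)
    mean = sum(volumes) / n

    # Residuals: deviation of each measurement from the mean.
    residuals = [v - mean for v in volumes]

    # Sample standard deviation (n - 1 in the denominator).
    std_dev = math.sqrt(sum(r * r for r in residuals) / (n - 1))

    # Standard error of the mean shrinks as 1/sqrt(n); more repeats tighten
    # the estimate, while suspiciously identical repeats warrant checking the setup.
    std_err = std_dev / math.sqrt(n)

    print(f"mean = {mean:.3f} mL, s = {std_dev:.3f} mL, SE = {std_err:.3f} mL")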


The book is pretty illuminating, but at times is kind of grim reading for anyone with past academic experience. What really sank Schon was the failure to replicate his work, but there were a lot of grad students and postdocs who wasted several years attempting to do so, ending up with nothing to show for it. Even so, there was a reluctance to admit that fraud had taken place. From the book:

Following the exposure of a major scientific fraud, senior scientists sometimes voice concern about the possibility of harm to the public image of their field. News reports and transparent investigations sometimes seem to be making the problem worse. But in practice, the scientists most seriously affected by Hendrik Schon's claims were more affected by the reality of fraud than the bad appearance that followed its exposure.

Leading journals like Science and Nature are also reluctant to withdraw papers for similar image reasons, and Schon was a consummate con artist: "He knew it was important to be friendly towards reviewers, and to thank them generously for their valuable comments. He was not too shy to solicit editors politely for their support." He also would go to senior scientists, ask for their opinion on what kind of data would be expected if his imaginary devices worked, and then fabricate data according to their estimates.

Initially, there was also major pushback against fraud claims by Bell Labs managers, up to and including giving poor performance reviews to some of Schon's co-workers who questioned his behavior, on the grounds that they were 'not being good team players'.

So, yes, if people had managed to replicate his work, even partially, he'd likely have gotten away with it, and would now be some top scientist in some prestigious institution.


Thanks for that info. As mentioned, it's been a long time since I looked at the Schön case in any serious way. I made my earlier comment from my smartphone; this one is from my PC, as I recall archiving a number of documents about the case on disk years back, so I went looking for them here (this also accounts for why Schön is now spelt correctly—the PC speller insists his name includes the umlaut).

So far, the most relevant document I've turned up from my archives is the Report Of The Investigation Committee On The Possibility Of Scientific Misconduct In The Work Of Hendrik Schön And Coauthors, September 2002, Lucent Technologies†, Malcolm R. Beasley et al. (aka the Bell Labs Beasley Report) [PDF]. Although not immediately to hand, I know that I'll still have physical copies of Schön's Science articles, as I've kept my old issues of this and other scientific journals.

It's all slowly coming back to me now—when examining Schön's papers (as published in, say, Science, Nature, Physical Review, etc.), one could have a running commentary on what was wrong with them if one read them in conjunction with the Beasley Report, which provided comprehensive instances of the fraud drawn from the published documents. In effect, Beasley provided an annotated guide to Schön's papers.

As I've just been reminded, the Beasley Report is very detailed: it is 129 pages long including appendices, one of which contains a table listing the 25 papers by Schön and cohorts that the Beasley Committee investigated. Schön is the only author to appear on all 25 papers, and the type of fraud is listed for each paper; the categories include 'Data Substitution', 'Unrealistic Precision', 'Contradictory Physics', 'Unusually Good Results', and 'Unusual Fabrication or Procedures'.

Presumably I'm not providing you with any additional information here as I'd expect that Samuel Reich would have detailed all this and more in his book.

In addition to the aforementioned chemistry incident the reason that I took an unusual interest in the Schön case was that I'd had reasonable experience in working with field effect transistors, so when the news of his research fraud broke the subject matter piqued my interest. Also, my professional experience in electronics made the Beasley Report easy to understand.

Reckon you're right about leading journals like Science, Nature, etc. being reluctant to withdraw papers for image reasons. In hindsight that's now very clear. In the long run, the failure of those running the scientific journals, and of the scientific community generally, to get fully on top of the Schön matter and other instances of scientific fraud has turned out to be very detrimental to science and scientific research.

Even 'soft' scientific fraud, such as the never-ending exaggerated claims made by research teams about the effectiveness of their research (usually for the purposes of obtaining funding and general PR), has dramatically compounded the problem. Together, these have had a serious and detrimental effect on the way much of the population perceives science: over recent decades huge swathes of people have turned away from, or been turned off, science, and many either no longer believe what scientists say or take their comments with a grain of salt.

Take the instance of climate change alone (there are many more). We only have to look at the millions of its skeptics and their widespread disbelief in climate science to know that they hold little respect for the subject; the same also holds for much other science. To make matters worse, many now hold disrespect and contempt for science and scientific institutions at levels that border on zealotry—that is, they hold attitudes of distaste for science more akin to the hatred and fury one often observes between warring religious groups.

There is little doubt that science no longer commands the very high respect from the population that it once did decades ago. Whilst no doubt there are many reasons, cultural and otherwise, that have contributed and combined to produce this downturn in science's popularity, it is nevertheless clear that exaggerated claims and mixed messages that have come from scientists and technocrats over the past 50 or so years have been largely responsible for creating these negative attitudes.

One only has to look at cancer research to witness the problem. If every instance of the many thousands of optimistic pronouncements about cancer made by researchers since WWII had actually contributed to curing the disease even to, say, the tiny extent of a 0.1% improvement per pronouncement, then the disease would have been wiped out many years ago. Yes, researchers have made progress over those 70-plus years, but cancer is still the second biggest killer of humankind—and the population is only too well aware of that fact, and also of the fact that scientists have failed to live up to their promises in that they've failed to deliver a cure. It's little wonder that such skepticism and disbelief have spread into other areas of health management—the failure of many to heed important messages about COVID; the unreasonable fear and loathing of chemicals, even benign ones, and of the chemical industry per se, views held by a huge percentage of the population—and so on.

The irrational fear of chemicals among so many of the population alone illustrates the absolute abject failure of science education.

It seems to me that science needs to completely rebrand itself, and it needs to begin by not making promises that it cannot keep—if in doubt, science should say nothing at all. Clearly, part of that rebranding ought to be aimed at cleaning up all aspects of scientific fraud, not just extreme instances such as the Schön matter. Scientific fraud has been with us for eons and probably always will be, but we're now long past the days of Charles Dawson's Piltdown Man-type hoaxes, which, incidentally, took over 40 years to be conclusively exposed as a fraud. We now have multiple techniques for detecting fraud, such as Benford's Law (aka the First Digit Law); these need to be combined with truly effective policies to ensure that scientific ethics and standards are maintained—if necessary, even to the extent of making them an adjunct to the Scientific Method, if that would make scientific research more honest and ethical. No doubt, in the next few years we will also see a huge enhancement in the detection of scientific fraud when AI begins to scan millions of existing research papers; no doubt it will find many more past instances of fraud.
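As a rough illustration of the Benford's Law idea (only a sketch with hypothetical numbers, not how any journal actually screens submissions), a first-digit check simply compares the observed leading-digit frequencies of a data set against the expected log10(1 + 1/d) distribution; large systematic deviations are a flag for closer scrutiny, not proof of fraud:

    import math
    from collections import Counter

    def first_digit(x: float) -> int:
        """Return the leading non-zero digit of a non-zero number."""
        s = f"{abs(x):e}"  # scientific notation, e.g. '1.324000e+02'
        return int(s[0])

    def benford_table(values):
        """Compare observed first-digit frequencies with Benford's Law."""
        digits = [first_digit(v) for v in values if v != 0]
        counts = Counter(digits)
        n = len(digits)
        rows = []
        for d in range(1, 10):
            observed = counts.get(d, 0) / n
            expected = math.log10(1 + 1 / d)  # Benford's first-digit law
            rows.append((d, observed, expected))
        return rows

    # Hypothetical measurement values, purely for illustration.
    sample = [132.4, 87.1, 19.3, 210.7, 45.2, 3.9, 560.0, 71.8, 98.6, 14.2]

    for d, obs, exp in benford_table(sample):
        print(f"digit {d}: observed {obs:.2f}, expected {exp:.2f}")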

Finally, if you think my comments about science's fall from favor and grace with a significantly large part of the population are grossly exaggerated, then it's worth spending eleven minutes on the Internet Archive viewing the short film Why Study Science? This rather corny documentary from 1955 was made to encourage US high school kids to take a positive interest in science and the study thereof. However, what's truly relevant about the film for us today is that it oozes with the then-existing ethos that everyone in society was interested in, or at least understood, the importance of science, no matter what their profession was or what other values they held: https://archive.org/details/WhyStudy1955_2.

(In my opinion, this film ought to get a wider coverage than it has at present if for no other reason than it gives us a clear reference point from which to measure changes in society's attitude towards science.)

There is no missing the fact that this documentary conveys par excellence the zeitgeist of the mid-1950s. It clearly shows that back then society valued science in ways, and to an extent, that we have not seen exhibited for many decades. Having an appreciation for science was the de facto ethos of those days, which may seem surprising given that the 1950s were also the peak period for nuclear weapons development. Despite that, people's faith in the value of science and scientific research didn't waver.

Furthermore, I can attest that this positive attitude towards science still prevailed with just about everyone I came in contact with a decade or so later, when it became my turn to study science in high school. It wasn't a matter that most people even consciously thought about, let alone questioned—it was simply the accepted norm.

It wasn't for another decade or so in the late 1970s to mid 1980s that the anti-science rot began to set in and take hold.

____

† BTW, I've just done an online search for the Beasley Report and some websites are still hosting it, here's the first one that I came across: https://w.astro.berkeley.edu/~kalas/ethics/documents/schoen....


Interesting, thanks for the write-up. The whole story really is too bad. I paid a lot of attention to it because I was working a bit with photoactive proteins and light-absorbing organic dyes around that time, and there was a similar case of rather fraudulent behavior by some of the people I was unfortunate to be working with, which led to my being ejected from the program after confronting the PI about it... it turned out most of the department knew he'd been cooking his data for years. I was so pissed off at the time that I sent a complaint to something called 'The Office of Research Integrity', who replied with a note that they weren't going to look into it. It rather soured me on the academic enterprise as practiced today, but I also knew of top-notch researchers who didn't engage in anything like that, and who even had procedures in place to prevent it. Key element: lab notebook discipline is very poor among the fraudsters; they 'lose samples' and so on.


"...most of the department knew he'd been cooking his data for years."

This is the depressing part of it. Systemic corruption and people turning a blind eye. If one's just a cog in the system and not the top brass, it's much easier to ignore the problem and plod on regardless. If it's too much for one's conscience and one turns whistleblower, one's status within the organization usually changes and one is perceived to be a 'leper' by coworkers - even by those who are not engaged in any nefarious activity. In effect, the whole organization coalesces and acts like a single organism trying to protect itself, the whistleblower being perceived as an internal threat. I know, I've been in that situation and it's not very nice. Moreover, it's certainly not the best career move.

In many organizations the top brass, as well as branch/departmental managers etc., arrive at their positions via the Peter Principle, and those who are promoted this way are usually smart enough to know it. Even if they aren't corrupt, they know that a whistleblower stands to disrupt the organization in a way that could threaten their position, hence their ambivalence about fixing the problem. (Presumably something similar happened at Bell.)

Unfortunately, whistleblowing often doesn't cure the problem in the long run. There is, however, one unexpected side effect, which is that one quickly learns who one's true friends are and who has real integrity. What's surprising is that they often turn out to be those whom one would least expect.

"lab notebook discipline is very poor among the fraudsters, they 'lose samples' and so on"

I have little doubt about that, especially if documentation is written up long after the event. There's a word of caution here though: I've sometimes rewritten notes after the event because my scrawl is almost illegible. As a result, rewrites can look too good and thus appear suss. My solution is to always keep the original scrawl no matter how bad it looks.



