Scientific integrity and the ethics of 'utter honesty' (mitpress.mit.edu)
52 points by morkin on April 29, 2022 | 33 comments



The best autopsy of the Jan Hendrik Schon case is "Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World" (2009) by Eugenie Samuel Reich. It answers a lot of the questions raised in this article, such as "how 16 fraudulent papers could have made it through the peer review process of the two premier science journals in the world".

The thing about the Schon case is this: he might have got away with all of it if it had turned out that such electronic devices made with organic plastic crystals had actually been achievable with that era's technology. Imagine a scientist who guesses the outcome of a particular experiment but just isn't good enough to get the data - so they fabricate data in order to be 'first to publish' and hence get the credit. Others come along with more skill and essentially replicate the fraudulent results. The original fraudster gets the credit for the discovery.

The actual practical result of that case, by the way, was that now anyone publishing in that field has to submit electron microscopy images with their report as proof that they actually did fabricate some kind of device, rather than just inventing it all. Schon didn't do that, but now everyone has to. That's progress, at least.

As for this quote: "Sometimes doing the right thing may mean not being completely honest in a larger social setting in order to prevent a great harm."

How does that work out in practice if you're a researcher in an academic department and discover your colleague down the hall is fabricating data in this manner? Most won't say a word, and their rationale is that (1) they'll be in the position of being the whistleblower, with all that entails, and (2) the fallout will make their institution look really bad and might hurt the ability of other researchers in that department to get grants. So it's not implausible that everyone in the department knows of the fraud, but they ignore it for years in the name of 'being good colleagues.'


"The thing about the Schon case is this: he might have got away with all..."

But could Schon have gotten away with it? I somehow doubt it. I was going to mention the Schon case and you beat me to it. I recall that when news of the case broke, it led me to read some of his papers containing the fabricated data. I've never fully understood why Schon faked the data or what he stood to gain by doing so.

First, I've not read Samuel Reich's account of the Schon case you mention; in fact, I didn't know it existed, or I definitely would have read it. It's now on my list to follow up.

It seems to me there are multiple reasons why a researcher would fabricate data or commit scientific fraud, including those you've mentioned. However, the problem I have is that I find it almost inconceivable that anyone who commits such fraud would actually believe they could get away with it and not get caught.

If the perpetrator simply believed that the fraud would never be discovered during his/her career or lifetime - or perhaps not even after death, and thus there'd never be a stain on his/her reputation - then I reckon we ought to question how he/she became a researcher in the first instance. Such woolly thinking doesn't smack of much intelligence.

Surely, every researcher knows that even if he/she managed to fool the peer review process with fabricated data (which is quite possible with new research that's not well understood), the situation wouldn't remain so forever. Once published in widely circulated journals, the fabricated data would live on for centuries and the fact that it's fraudulent would eventually be discovered. Moreover, a researcher who intends to fabricate data would also know (or ought to know) that once errors in scientific papers are discovered, it's usually not hard to distinguish between fabricated data and actual errors, mistakes, etc.

It's a long while since I've read some of Schon's papers, but I recall that at the time it was clear even to me, someone who hadn't worked as a researcher in that field, that his results were just a little too clean and neat. Sure, by then I had the hindsight of knowing that parts of those papers contained fabricated data, but even so it was pretty obvious. Thus, to my mind, the question remains why he undertook such risks with his career. (Perhaps Samuel Reich's book answers that, so I'm looking forward to reading it.)

Incidentally, the matter of scientific fraud has never been far from my mind since a somewhat trivial event happened to me during my student days - although it wasn't trivial at the time (I've recounted this previously). At the end of a chemistry lab experiment the tutor accused me of fabricating the results of a titration, claiming I had adapted my work directly from the textbook, as he reckoned my results were too good to be true. I was absolutely furious and insisted that he stay back with me in the lab over lunchtime whilst I repeated the titration, which he did. The result was that several of the measurements were even closer to the theoretical values than in the earlier experiment.

That incident led me not only to question and be suspicious of my own data and experimental setups but also those of others. This brings me on to the matter of the replication crisis, but I won't elaborate on that further here except to say that Schon must have been worried that others wouldn't be able to replicate his work - that is, as you've mentioned, unless he was pretty certain others would be able to do so, which would explain his motive.


> Sure, by then I had the hindsight of knowing that parts of those papers contained fabricated data but even so it was pretty obvious.

But this IS the crux of the problem. Chess has a similar situation--the forced checkmate is obvious after the fact. However, even chess masters miss forced checkmates over the board all the time. And that situation is far less subtle than research data.

> I was absolutely furious and I insisted that he stay back with me in the lab over lunchtime whilst I repeated the titration which he did.

I applaud him, but that's not good lab technique. Good lab technique is "Here's my lab notebook. You can see my data along with the handwritten timestamps. The procedure is also written in there. YOU will replicate this while I watch and correct YOUR lab technique." THAT'S good lab technique.

I had a physics lab as a freshman taught by an absolute tyrant of a professor. He would pull one notebook every lab and try to replicate your results. If he couldn't, you failed that lab. You had to record EVERYTHING in your lab notebook. Serial numbers of equipment, test and calibration procedure steps, data that you knew was invalid and record the reason why ... all in pen with timestamps with no erasures ever allowed.

It was brutal.

He was totally unapologetic. His stance: "If you can't get the same result three times, you don't even know that you're wrong let alone that you're right. I'm actually being generous in the lab--you only have to get the same result twice."


"I applaud him, but that's not good lab technique. Good lab technique is "Here's my lab notebook. You can see my data along with the handwritten timestamps."

I don't think he had much choice other than to stay back, because his comment was made at the end of the morning's lab session, which ended at lunchtime. As was his wont, he walked around the lab near the end of the session looking at students' results.

It would not have been easy to 'prefabricate' results ahead of time, for two reasons. The first is that whilst the type of experiment was known to students ahead of time, the exact details, such as the quantities of reagents etc., were not; second, lab work notes required not only time and date but also the lab temperature, atmospheric pressure and humidity to be entered at the beginning of the work (and occasionally these were relevant and had to be taken into account). Students acquired this info from instruments, barometer etc., on the lab wall near the entrance (we usually wasted about 5 mins crowded around whilst each student recorded the details separately). BTW, these were analog instruments, wet and dry bulb etc., thus the measurements consumed noticeable time.

For brevity's sake, what I didn't mention was that there were two of us involved in the experiment; thus, by necessity, we both had to have the same results. As was the practice, students did lab work in pairs (each bench had room for two students), so both of us were implicated (although I was the one he addressed). The titration was only part of a larger experiment. What he implied was that the data points on our hand-drawn graphs (done during the lab work) showing various stages of the titration didn't come from the experiment itself but that we'd inferred (interpolated) them from textbook info we'd have known ahead of time. In theory, he was of course correct: knowing the textbook theory, it wouldn't have been that difficult to interpolate the results on-the-fly. As two of us were involved, he was also implying collusion (although he didn't say so).

All I can say is that his mind was more devious than ours. For starters, neither of us was that organized, and consulting the list of each session's lab work, which had been issued months before, wasn't something I did (again, such dedication eluded me - too many other distractions). The other point was that whilst accuracy in one's lab work was important, it's not as if we were awarded marks on accuracy; what mattered was actual attendance at the lab and successful completion of the experiment - so planning ahead as he implied would have been the last thing on our minds.

"You had to record EVERYTHING in your lab notebook. Serial numbers of equipment, test and calibration procedure steps, data that you knew was invalid and record the reason why ... all in pen with timestamps with no erasures ever allowed."

Right, there's nothing wrong with this, and getting students into the habit of calibrating their setups/experimental equipment is absolutely important. In fact, for me this started in science from the first year of high school and followed on at university. Calibration is still ingrained in my mind (later in life I was involved in standards work, so such thinking comes naturally). That said, the way to police lab work is for the instructor to walk around during the experiments and ensure that such recordings are done on the spot. As mentioned, when I was doing lab practicals the emphasis was on a successful outcome on the day - results later on weren't that important; actually learning to do it properly was what mattered.

""If you can't get the same result three times, you don't even know that you're wrong let alone that you're right.""

There's a lot in that which makes sense; however, what we were taught was that you'd be unlikely to get exactly the same result each time because of experimental error. Getting an identical result wasn't as important as checking one's technique each time and then correlating each set of results thereafter. In fact, we were warned to be suspicious if one got identical results - some equipment, setup etc. may not have been properly reset from the previous run of experiments. What was more important, beginning even in the first year of high school science, was to process one's results statistically. That meant summing residuals etc. using stats - root-n-type stuff, the standard error shrinking as the square root of the number of measurements. Various statistical techniques were taught in both physics and chemistry and applied right through my training.
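To make the root-n point concrete, here's a minimal Python sketch of the kind of processing meant - the function name and the titration figures are invented for illustration:

    import math

    def mean_and_standard_error(readings):
        # Sample mean of the repeated measurements.
        n = len(readings)
        mean = sum(readings) / n
        # Sample standard deviation (n - 1 in the denominator),
        # then the standard error of the mean: s / sqrt(n).
        s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
        return mean, s / math.sqrt(n)

    # Hypothetical titration volumes in millilitres.
    volumes = [24.95, 25.02, 24.98, 25.05, 24.99]
    m, se = mean_and_standard_error(volumes)
    print(f"{m:.3f} +/- {se:.3f} mL")

Quadrupling the number of readings only halves the standard error, which is why careful technique beats brute repetition.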


The book is pretty illuminating, but at times is kind of grim reading for anyone with past academic experience. What really sank Schon was the failure to replicate his work, but there were a lot of grad students and postdocs who wasted several years attempting to do so, ending up with nothing to show for it. Even so, there was a reluctance to admit that fraud had taken place. From the book:

Following the exposure of a major scientific fraud, senior scientists sometimes voice concern about the possibility of harm to the public image of their field. News reports and transparent investigations sometimes seem to be making the problem worse. But in practice, the scientists most seriously affected by Hendrik Schon's claims were more affected by the reality of fraud than the bad appearance that followed its exposure.

Leading journals like Science and Nature are also reluctant to withdraw papers for similar image reasons, and Schon was a consummate con artist: "He knew it was important to be friendly towards reviewers, and to thank them generously for their valuable comments. He was not too shy to solicit editors politely for their support." He also would go to senior scientists, ask for their opinion on what kind of data would be expected if his imaginary devices worked, and then fabricate data according to their estimates.

Initially, there was also major pushback against fraud claims by Bell Labs managers, extending to poor performance reviews for some of Schon's co-workers who questioned his behavior, on the grounds that they were 'not being good team players'.

So, yes, if people had managed to replicate his work, even partially, he'd likely have gotten away with it, and would now be some top scientist in some prestigious institution.


Thanks for that info. As mentioned, it's been a long time since I looked at the Schön case in any serious way. I made my earlier comment from my smartphone; this one is from my PC, as I recall archiving a number of documents on disk about the case years back, so I went looking for them here (it also accounts for why Schön is now spelt correctly - the PC speller insists his name includes the umlaut).

So far, the most relevant document I've turned up from my archives is the Report Of The Investigation Committee On The Possibility Of Scientific Misconduct In The Work Of Hendrik Schön And Coauthors, September 2002, Lucent Technologies,† by Malcolm R. Beasley et al. (aka the Bell Labs Beasley Report) [PDF]. Although not immediately to hand, I know that I'll still have physical copies of Schön's Science articles, as I've kept my old issues of this and other scientific journals.

It's all slowly coming back to me now - when examining Schön's papers (as published in, say, Science, Nature, Physical Review, etc.), one could have a running commentary on what was wrong with them if one read them in conjunction with the Beasley Report, which provided comprehensive instances of the fraud from the published documents. In effect, Beasley provided an annotated guide to Schön's papers.

As I've just been reminded, the Beasley Report is very detailed: it is 129 pages long including appendices, one of which contains a table listing the 25 papers by Schön and his co-authors that the Beasley Committee investigated. Schön is the only author to appear on all 25 papers, and the type of fraud is listed for each paper; the types include 'Data Substitution', 'Unrealistic Precision', 'Contradictory Physics', 'Unusually Good Results', and 'Unusual Fabrication or Procedures'.

Presumably I'm not providing you with any additional information here, as I'd expect Samuel Reich to have detailed all this and more in her book.

In addition to the aforementioned chemistry incident the reason that I took an unusual interest in the Schön case was that I'd had reasonable experience in working with field effect transistors, so when the news of his research fraud broke the subject matter piqued my interest. Also, my professional experience in electronics made the Beasley Report easy to understand.

Reckon you're right about leading journals like Science, Nature etc. being reluctant to withdraw papers for image reasons. In hindsight that's now very clear. In the long run, the failure of those running the scientific journals and the scientific community generally to get fully on top of the Schön matter and other instances of scientific fraud has turned out to be very detrimental for science and scientific research.

Even 'soft' scientific fraud, such as the never-ending exaggerated claims made by research teams about the effectiveness of their research (usually for the purposes of obtaining funding and general PR), has dramatically compounded the problem. Together, these have had a serious and detrimental effect on the way many of the population perceive science: over recent decades huge swathes of them have turned away from or been turned off science, and many no longer believe what scientists say, or they take their comments with a grain of salt.

Take the instance of climate change alone (there are many more). We only have to look at the millions of its skeptics and their widespread disbelief in climate science to know that they hold little respect for the subject, and the same holds for much of the rest of science. To make matters worse, many now hold such disrespect and contempt for science and scientific institutions at levels that border on zealotry - that is, they hold attitudes of distaste for science that are more akin to the hatred and furor one often observes between warring religious groups.

There is little doubt that science no longer commands the very high respect from the population that it once did decades ago. Whilst no doubt there are many reasons, cultural and otherwise, that have contributed and combined to produce this downturn in science's popularity, it is nevertheless clear that exaggerated claims and mixed messages that have come from scientists and technocrats over the past 50 or so years have been largely responsible for creating these negative attitudes.

One only has to look at cancer research to witness the problem. If every instance of the many thousands of optimistic pronouncements about cancer made by researchers since WWII had actually contributed to curing the disease even to, say, the tiny extent of a 0.1% improvement per pronouncement, then the disease would have been wiped out many years ago. Yes, researchers have made progress over those 70-plus years, but cancer is still the second biggest killer of humankind - and the population is only too well aware of that, and of the fact that scientists have failed to live up to their promises by failing to deliver a cure. It's little wonder that the consequences of such skepticism and disbelief have morphed into other areas of health management - the failure of many to heed important messages about COVID; the unreasonable fear and loathing of chemicals (even benign ones) and the chemical industry per se, views held by a huge percentage of the population; and so on.

The irrational fear of chemicals among so many of the population alone illustrates the absolute abject failure of science education.

It seems to me that science needs to completely rebrand itself, and it needs to begin with not making promises that it cannot keep - if in doubt, science should say absolutely nothing. Clearly, part of that rebranding ought to be aimed at cleaning up all aspects of scientific fraud, not just extreme instances such as the Schön matter. Scientific fraud has been with us for eons and will probably always be so, but we're now long past the days of Charles Dawson's Piltdown Man-type hoaxes, which, incidentally, took over 40 years to be conclusively exposed as a fraud.

We now have multiple techniques for detecting fraud, such as Benford's Law (aka the First Digit Law). These need to be combined with truly effective policies to ensure that scientific ethics and standards are maintained - if necessary, even to the extent of making them an adjunct to the Scientific Method, if that would make scientific research more honest and ethical. No doubt, in the next few years we will also see a huge enhancement in the detection of scientific fraud when AI begins to scan millions of existing research papers, and no doubt it will find many more past instances of fraud.
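As a rough illustration of the Benford's Law check mentioned above, here's a minimal Python sketch - the function name and the chi-squared approach are my own choices rather than any standard tool's, and a large statistic only flags data for scrutiny; it doesn't prove fraud:

    import math
    from collections import Counter

    def benford_chi2(values):
        # Leading digit of each non-zero value, via scientific notation.
        digits = [int(f"{abs(v):e}"[0]) for v in values if v != 0]
        counts = Counter(digits)
        n = len(digits)
        # Chi-squared statistic against Benford's expected
        # frequency P(d) = log10(1 + 1/d) for d = 1..9.
        chi2 = 0.0
        for d in range(1, 10):
            expected = n * math.log10(1 + 1 / d)
            chi2 += (counts.get(d, 0) - expected) ** 2 / expected
        return chi2

    # Data spread over many orders of magnitude tends to follow
    # Benford's Law; uniformly distributed leading digits do not.
    print(benford_chi2([2 ** k for k in range(1, 100)]))  # small statistic
    print(benford_chi2(list(range(100, 1000))))           # large statistic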

Finally, if you think my comments about science's falling out of favor and grace with a significantly large part of the population are grossly exaggerated, then it's worth spending eleven minutes on the Internet Archive viewing the short film Why Study Science? This rather corny documentary from 1955 was made to encourage US high school kids to take a positive interest in science and the study thereof. However, what's truly relevant about the film for us today is that it oozes with the then-existing ethos that everyone in society was interested in and understood the importance of science, no matter their profession or other values: https://archive.org/details/WhyStudy1955_2.

(In my opinion, this film ought to get a wider coverage than it has at present if for no other reason than it gives us a clear reference point from which to measure changes in society's attitude towards science.)

There is no missing the fact that this documentary conveys par excellence the zeitgeist of the mid 1950s. It clearly shows that back then society did value science in ways and to the extent that we have not seen exhibited for many decades. Having an appreciation for science was the de facto ethos of those days, which may seem surprising given that the 1950s was also the peak period for nuclear weapons development. Despite that, people's faith in the value of science and scientific research didn't waver.

Furthermore, I can attest that this positive attitude towards science still prevailed with just about everyone I came in contact with a decade or so later, when it became my turn to study science in high school. It wasn't a matter that most people even consciously thought about, let alone questioned - it was simply the accepted norm.

It wasn't for another decade or so in the late 1970s to mid 1980s that the anti-science rot began to set in and take hold.

____

† BTW, I've just done an online search for the Beasley Report and some websites are still hosting it, here's the first one that I came across: https://w.astro.berkeley.edu/~kalas/ethics/documents/schoen....


Interesting, thanks for the write-up. The whole story really is too bad, I paid a lot of attention to it because I was working a bit with photoactive proteins and light-absorbing organic dyes around that time, and there was a similar case of rather fraudulent behavior by some of the people I was unfortunate to be working with, which led to my being ejected from the program after confronting the PI about it... turned out most of the department knew he'd been cooking his data for years. I was so pissed off at the time I sent a complaint to something called 'The Office of Research Integrity' who replied with a note that they weren't going to look into it. Rather soured me on the academic enterprise as practiced today, but I also knew of top-notch researchers who didn't engage in anything like that, and even had procedures in place to prevent it. Key element: lab notebook discipline is very poor among the fraudsters, they 'lose samples' and so on.


"...most of the department knew he'd been cooking his data for years."

This is the depressing part of it. Systemic corruption and people turning a blind eye. If one's just a cog in the system and not the top brass, it's much easier to ignore the problem and plod on regardless. If it's too much for one's conscience and one turns whistleblower, one's status within the organization usually changes and one is usually perceived as a 'leper' by coworkers - even by those who are not engaged in any nefarious activity. In effect, the whole organization coalesces and acts like a single organism trying to protect itself, the whistleblower being perceived as an internal threat. I know, I've been in that situation and it's not very nice. Moreover, it's certainly not the best career move.

In many organizations the top brass as well as branch/departmental managers etc. arrive at their positions via the Peter Principle, and those who are promoted this way are usually smart enough to know it. Even if they aren't corrupt, they know that a whistleblower stands to disrupt the organization in a way that could threaten their position, hence their ambivalence about fixing the problem. (Presumably something similar happened at Bell.)

Unfortunately, whistleblowing often doesn't cure the problem in the long run. There is, however, one unexpected side effect, which is that one quickly learns who one's true friends are and who has real integrity. What's surprising is that they often turn out to be those one would least expect.

"lab notebook discipline is very poor among the fraudsters, they 'lose samples' and so on"

I have little doubt about that, especially if documentation is written up long after the event. There's a word of caution here though: I've sometimes rewritten notes after the event because my scrawl is almost illegible. As a result, rewrites can look too good and thus appear suss. My solution is to always keep the original scrawl, no matter how bad it looks.


I think the most important quotes from Feynman's address are the ones about being fooled, such as this one:

> The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that. [0]

[0] https://calteches.library.caltech.edu/51/2/CargoCult.htm

There are assumptions at the root of all human endeavors. We can't replicate all the experiments ourselves, so we must assume our forebears were on the right track. But there's no good process for fixing science-mistakes when the science-mistake becomes sacrosanct ("regarded as too important or valuable to be interfered with"). For example, Feynman discusses Millikan's falling oil drop experiment and how hard it was to correct Millikan's not-quite-correct-but-published result.

You can have integrity and 'utter honesty' but still be wrong if your starting assumptions are wrong.


"> The first principle is that you must not fool yourself—"

Good post. From my experience, it's pretty easy to fool oneself (it's happened to me too many times).

It's very easy to hold preconceived notions or ideas about something, and it then takes little evidence, if any, to convince oneself that those ideas are correct, even when the facts are to the contrary.

Moreover, preconceived ideas and notions often stop one from reviewing evidence one already has in one's possession. And when one is without supporting evidence, the very nature of the preconception may stop one from seeking out new or additional information.

Essentially, one ought always to question entrenched views that one hasn't arrived at through a process of substantial and rigorous thought.

The trouble is that this is easy to say; keeping the notion ever present at the forefront of one's mind is damn hard.


I think this article kind of misses the mark? There is a big gulf between reporting other explanations and tests for the phenomena you're studying and divulging information that you believe is dangerous.

A journal of bad results would be incredible. You could have one for novel or interesting failures, as the article posits, and you could have one that catalogued and categorized more mundane failures - if it's not an interesting failure you probably don't need to publish all the particulars, just make your findings available for inspection. I think that kind of information, condensed down, could be very useful.


"Would anyone really care to subscribe to, let alone publish in, the American Journal of Discarded Hypotheses, the Annals of Failed Experiments, or PLOS Dumpster?"

I have often half-joked about the Journal of Cautionary Tales, and would genuinely find "This particular study went sideways, and here's why..." quite valuable. It would also allow researchers to still get "credit" for contributing to the knowledge of how science is done.


Yes, but it needs a much more catchy name.

Many people subscribe to "weekly videos of random stuff exploding" on YouTube.


> "This particular study went sideways, and here's why..."

You often can't tell if it went sideways because your hypothesis was shite, or your experiment was.

And when you can guess that your experiment was, you often don't know why.


But you often can.

All journals are selective. This would select for "We know why, and there's practical knowledge to be gained from the why."


Sure, but that is a great reason to publish it. It allows others in the community to pick up where things left off. Maybe someone else has the missing pieces of the puzzle.


We've talked about founding the Journal of the Null Hypothesis, which publishes well-designed experiments which reveal a particular hypothesis to be wrong or insignificant.


There's always RISKS digest: https://catless.ncl.ac.uk/Risks/


The problem with a journal of bad results is that it would only be useful if you kept meticulous notes while getting the bad results. Which functionally turns them into good results that can be published normally, since you now have enough content to write something like "<Phenomenon> not found under <conditions>" - which is basically how you call out the original authors.

The reality of when you can't reproduce something is usually some combination of the authors did not report something they considered minor, but which turned out to be important, and/or that you're just bad at it, but in both cases your actual research probably wasn't trying to study that phenomenon - you wanted to use it for something else.

EDIT: I will say, what would be great would be if live-streaming your experiments became a thing (so really just recording them) at least in chemistry (where I worked). There would be immense value in being able to pull the recordings of someone getting the result they claim so you can see their whole technique, setup, lab and process - because that's where the important details creep in.


The issue is that there already is so much being published that it is nearly impossible to keep up in many fields. Any solution that involves more publications misses the mark and is detrimental, IMO. We will only get to better science if we move to a situation where publishing less is acceptable, because the only way we can detect dishonesty is if scientists actually have enough time to follow up on weird results.


> would anyone really care to subscribe to, let alone publish in, the American Journal of Discarded Hypotheses, the Annals of Failed Experiments, or PLOS Dumpster

Actually, this seems like it would be really useful. When I'm about to do something expensive, I usually try to find information about whether it's likely to work and how it could go wrong.

A journal of failed experiments might prevent a lot of duplicated effort.

Plus, of course, making meta-studies a lot more reliable.


> A journal of failed experiments might prevent a lot of duplicated effort.

We need to consider first that duplicated effort is not necessarily wasted effort. It could be that one scientist who tried the experiment and found that it failed didn't account for certain externalities, or just did a poor job. Maybe trying it again is just what needs to happen for a discovery to be made.


The problem seems to be how do you choose what studies to publish? And how do you find anything in the mountain of papers? Or is there a separate journal of failures for each discipline?


I think this very incrementalist, formulaic view of science that stresses replication, falsification and so on is in many ways broken. For one, as Russ Ackoff used to point out, one cannot get what one wants by merely not doing the wrong thing. There is an infinite number of false or uninteresting theories, and so you end up with Borges' Library of Babel, which in many ways is already the status quo of automated, bureaucratic science, producing almost endless amounts of low-impact research.

I think a better way to do science is not to rely on honesty or processes, but to do science with practical goals in mind and to be ambitious enough to produce results that will assert themselves by virtue of their impact.

The Manhattan Project, mentioned in an ethical context in the piece, is to me a good example of a scientific project that did not rely on ordinary scientific veracity or processes, but on goal-oriented work with a positive rather than negative (in the technical sense of the term) result in mind - one that could simply not be argued about. It also explains, I think, why a lot of cutting edge research today has moved into the corporate sector: rather than being constrained by academic formalisms, there is a focus on novelty and big leaps.

The frequency of arguments about trust in science or honesty today is to me mostly a symptom of the low, marginal impact and irrelevance of much research. Operation Warp Speed is for me a good modern example of how science ought to be done: very much not in the spirit of debate or scientific bureaucracy, but using new technology to pursue a goal where success or failure would be obvious.


Pennock's books look interesting and I wish I had more time to digest them. "Curiosity and the Moral Character of Science" especially so.

His parting line "Integrity in science involves a community of practice, unified by its shared values." is really powerful. We cannot do it alone.

Notwithstanding stumbling across a new atom bomb, physicists have it easy compared to computer scientists, I would say.

Our problem is that, to the extent we are scientists in the business of "Truth", our lot now inescapably intersects with the scurrilous worlds of advertising, surveillance and malinfluence - the world of deception and concealment. Computing is embattled to maintain its integrity against the worst sides of its principal funding and applications.

One cannot neatly separate out contemporary software issues of privacy, dignity, agency and freedom, from scientific integrity while using tainted tools - which is what I think Pennock means by a moral community of practice.


There actually was a Journal of Negative Results in Biomedicine, but it ceased publication in 2017.


Since 2019, there exists "The Conference for Failed Approaches and Insightful Losses in Cryptology", a.k.a. CFAIL [0]

[0]: https://www.cfail.org/


> International Journal of Interesting Negative Results

"Interesting" results would presumably include failures to replicate the results of other researchers. Those need to be published.

But really, for an honest scientist, aren't negative results at least as interesting as positive results? I'm assuming that a positive result is one that confirms some theory or assumption; so a negative result is actually more interesting, in that it kicks over the cauldron, and forces a re-evaluation of previous theories and assumptions. A positive result means "All is groovy; rudder amidships, full ahead". A negative result means "Wait - where exactly are we anyway?"


Is there a journal that only publishes properly reproduced results? I mean, a scientific paper is basically a draft until others manage to reproduce it; only at that point is it reliable. So it seems like there would be great value in having such a journal.


They are called review journals (and then eventually textbooks).

It's a category error to imagine any single paper is "true" - it's basically just a bit of semi-reliable evidence that contributes towards finding the truth. The process is robust (over sufficient time) to errors in any individual paper.


> "Moreover, would anyone really care to subscribe to, let alone publish in, the American Journal of Discarded Hypotheses, the Annals of Failed Experiments, or PLOS Dumpster?"

I should hope so! With the right format, it would basically be StackOverflow for researchers. Interesting questions with hard solutions, attracting the attention of the experts who create discussion? Who WOULDN'T care about that?


Too bad this article doesn't talk about the registration of studies before doing them - that's a kind of 'utter honesty'.


TL;DR. I support widespread scientific fraud, so the market can then save us.




