What is medicine's 5 sigma? [pdf] (thelancet.com)
26 points by jeffreyrogers on May 25, 2015 | hide | past | favorite | 15 comments



What a lot of nonsense. I would have hoped for more from a journal that facilitated one of the greatest and most harmful scientific frauds of all time when it published the Andrew Wakefield paper on autism and vaccination [1]. The Lancet also doesn't publish data beyond what can be stuffed into a single PDF, and it sells the journal through the glorified academic extortion racket Elsevier. So yes, thanks for pointing out that 'nobody is ready to take the first step to clean up the system.' Completely empty rhetoric.

[1] http://en.wikipedia.org/wiki/Andrew_Wakefield


Can we determine, by looking at papers, whether a piece of research has been replicated?

Can Google determine that?

If so, adding a Google Scholar search filter for replicated, proven research could be useful, and might even make verified work more citable, hence improving incentives somewhat.


Cited does not equal verified.

Negative results often go unpublished, yet may be just as important to working researchers.

Until financial rewards are divorced from citation rates, this will remain a problem.


Sure, cited doesn't mean verified; you need to read more deeply than that. But if a paper from a reputable source says "We replicated X", or "X was replicated in Y", those are useful data points with regard to an article being verified.

And if Google understood that and helped surface it, it would be far easier to find verified research, which makes it more likely that I'll base my research on it and cite it.

And if it's more likely to be cited, maybe more verification work will be done.


Why is there no market for negative results? Surely it's cheaper to pay for a paper about what doesn't work than to inadvertently repeat a bound-to-fail experiment. One would think that grant funders would insist on a review of known failures before funding "new" research.


> Can we determine, by looking at papers, whether a piece of research has been replicated?

https://en.wikipedia.org/wiki/Meta-analysis is not trivial; it's a fairly sophisticated field in its own right. It'll be a long time before Google Scholar can replace the https://en.wikipedia.org/wiki/Cochrane_Collaboration.


Wow. Disappointing. This is like a tragedy of the commons: nobody is willing to take the first step. I think severe remedies are needed; "better peer review" is not going to cut it. For example, make it a requirement that a finding be reproduced in an independent lab before publication.


Unfortunately that proposal would favour the well-funded, since research is incredibly expensive and time-consuming.

I like the idea of a compulsory (or at least public and open) registry of all research. All trials and other types of research would be registered before they are started. The design would then be publicly available, and progress could be tracked from proposal through to publication.

This would potentially highlight several weaknesses in the peer review process, allowing them to be dealt with. Some p-hacking might be revealed by showing where designs changed between proposal and publication. It would also expose trials that were never published, highlighting publication bias and suppressed negative results.



Interestingly, synthetic chemistry is a good example of a field that has policed itself well. I don't know if it's an official rule anywhere, but synthetic chemists routinely run a battery of independent characterization tests on any new compound: IR, NMR, mass spectrometry, crystallography, and elemental analysis. Plus they need a plausible explanation of how the molecule was formed.


Fairly well. There's still the "Corey yield". The Sezen/Sames crisis happened too. I had a grad school friend who sent a sample to the crystallography service and got back the structure he was looking for, even though he had submitted salt.


That would simply result in collusion between "independent" labs. The problem here is bad faith.


One thing is for sure, changing alpha is not the answer!

I thought the article was going to be about six sigma and the Japanese theory of manufacturing. For a while the article was heading there, with a mention of changing incentives and rewarding honest criticism.


Yeah, if there's anywhere I don't want to see using six-sigma alpha, it's medicine. That's a death sentence for hundreds of millions of people, at a minimum.

(Alpha is only part of the tradeoff with power and sample size; since we're not going to get sample sizes boosted into the millions for medical trials, six-sigma could only be achieved by trashing power to the point where they never reach any positive result at all.)
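To put rough numbers on that tradeoff, here's a sketch using the standard two-sample z-test sample-size approximation. The 0.3 effect size and 0.8 power are illustrative assumptions, not figures from the article; the point is just how quickly the required n grows as alpha shrinks toward five-sigma territory:

```python
from statistics import NormalDist

def n_per_group(alpha, power=0.8, effect_size=0.3):
    """Approximate n per arm for a two-sample z-test:
    n = 2 * ((z_alpha + z_power) / d)^2  (one-sided alpha)."""
    z_a = NormalDist().inv_cdf(1 - alpha)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)       # quantile for desired power
    return 2 * ((z_a + z_b) / effect_size) ** 2

p_conventional = 0.05                       # the usual threshold (~2 sigma)
p_five_sigma = 1 - NormalDist().cdf(5)      # one-sided five-sigma tail, ~2.9e-7

print(round(n_per_group(p_conventional)))   # roughly 137 per arm
print(round(n_per_group(p_five_sigma)))     # roughly 758 per arm
```

So even a five-sigma threshold (let alone six) multiplies the required enrollment by a factor of about five at fixed power; holding sample size fixed instead would crater the power, as the parent comment says.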


This article gets at least one thing right: the practical incentives in much of science run counter to what we might reasonably agree are the right outcomes to target.



