Lies, Damned Lies, and Medical Science (theatlantic.com)
82 points by jamesbritt on Oct 15, 2010 | 27 comments



Much of the solution to this is straightforward. Methodologies should be clearly expressed in detail at the start of the paper; all studies should be publicly logged before they start (so it's not possible to hide studies which say the "wrong" thing); and resources should be made available so that factors such as randomisation can be properly understood and executed with little or no additional effort. The more information that is available, the more likely it is that errors and bias will be caught.

I'd go on but Ben Goldacre has said it all far better than I could, and in far greater detail in his book Bad Science which I think has just been released in the US (http://www.amazon.com/Bad-Science-Quacks-Pharma-Flacks/dp/08...).

What is important, though, is not to assume that because the evidence for conventional medicine is sometimes weak, that weakness somehow makes the case for less mainstream alternatives.

Flaws in one do not in any way strengthen the other, and for alternative medicine the evidence and studies are almost always either even weaker or non-existent.


...all studies should be publicly logged before they start (so it's not possible to hide studies which say the "wrong" thing);...

Not just logged, but accepted by journals. I.e., you submit a paper, explain the experiment design, and put in dummy tables. The conclusion section is unwritten.

The journal accepts or rejects based solely on methodology - they can't reject you after the fact for going against conventional wisdom or getting a null result.


Pre-publication registration of trials has happened. The major journals agreed they would only publish trials that were registered before they began, and there are sites for this (www.clinicaltrials.gov). Unfortunately, I saw a paper recently which found that almost half of registered trials had significant differences between the registered and published methodologies, and no one was checking on this very carefully.


Flaws in one do not in any way strengthen the other, and for alternative medicine the evidence and studies are almost always either even weaker or non-existent.

exp(This).


Long article, but one I found particularly interesting as I have a medical background.

The article presents it as if it were some big revelation in 2005, but I'm not sure it was as big as implied. I mean, some of the specifics were, but I did my residency from 2000 to 2003, and we were very much trained to be skeptical of studies and to question results. Evidence-based medicine was in strong force.

His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.

Non-randomized studies are the most common, of course; they are by far the cheapest to perform. But whenever I read a non-randomized study, I think "interesting" but realize it doesn't mean anything on its own. Correlation does not equal causation. These non-randomized studies, though, are the ones that raise questions and possibilities that later fund/justify more costly randomized controlled studies.

And something to be aware of, which is perhaps part of the purpose of this article, is that there is a reporting/publishing bias: negative and neutral results simply don't get reported in journals. You form a hypothesis, perform a study, and get negative results -- well, you're not going to try to publish it. Statistically speaking, there's going to be some bell-curve distribution around the actual result. So let's say the actual result is "0" (no effect) for a drug or treatment. If you do enough studies, a few will fall on the positive side, which I presume is what accounts for some of the false positives.
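To make that concrete, here's a minimal sketch in Python (all the numbers are mine and purely illustrative, not from the article): simulate many studies of a drug with zero true effect and count how many land far enough out on the positive side of the bell curve to look like a finding.

  import random
  import statistics

  random.seed(1)
  TRUE_EFFECT = 0.0   # the drug truly does nothing
  N_STUDIES = 1000    # hypothetical number of independent studies
  N_PATIENTS = 30     # patients per study (illustrative)
  SE = 1.0 / N_PATIENTS ** 0.5   # standard error of the mean, unit-variance outcomes

  positive = 0
  for _ in range(N_STUDIES):
      # Each study estimates the effect as the mean of noisy per-patient outcomes.
      outcomes = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PATIENTS)]
      if statistics.mean(outcomes) > 2 * SE:   # ~2 standard errors, roughly one-sided p < 0.025
          positive += 1

  print(f"{positive} of {N_STUDIES} null studies came out 'positive'")

If only that "positive" handful gets written up, the literature shows a real-looking effect where none exists.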


"You form a hypothesis, perform a study, and get negative results -- well, you're not going to try to publish it."

This seems fundamentally incorrect to me. Wouldn't research as a whole just fail if no one ever reported their negative results, thus dooming many other researchers to performing the same fruitless experiments?


It's actually worse than that, if a lot of studies are being done:

Let's say that you only publish when you discover an effect with a p-value better than 0.05 -- that is, when the probability of observing an effect at least as extreme as the one you got, assuming the effect isn't real, is less than 5%. This is pretty typical.

Let's also say that you and 19 other groups are studying an effect that isn't real: the hypothesis that meditating on pink unicorns will get rid of skin cancer.

By (perfectly reasonable) chance, 19 of the 20 studies find no support for the Pink Unicorn Hypothesis, and one appears to confirm it at p = 0.05 -- i.e., one group gets a result that should happen only 1/20 of the time or less if there is no Pink Unicorn effect.

Since the first 19 groups are silent, and only one group publishes, the only thing we see is the exciting announcement of a possible new skin cancer cure, with no hope for a meta-study that notices that this is actually the expected result given the null hypothesis.
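A quick back-of-the-envelope check of those odds in Python (the 5% threshold and the 20 groups are from the scenario above; the rest is mine):

  # Chance that at least one of 20 independent groups gets a spurious
  # "discovery" when each tests a nonexistent effect at p < 0.05.
  alpha = 0.05   # per-study false-positive rate
  groups = 20
  print(1 - (1 - alpha) ** groups)   # ~0.64

So even with everyone doing honest statistics, there's roughly a 64% chance that somebody "finds" the Pink Unicorn effect.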

So yeah. That's bad.


In this case, those other 19 groups ought to respond publicly fairly quickly--"we tried that too, and it didn't work for us."


No, they usually don't, because "didn't work for us" is usually not conclusive proof of the contrary.

It does (rarely) happen in physics, where everything is expected to be repeatable and results from one experiment carry over to similar experiments. It almost never happens in medicine, where the bar for acceptance of a hypothesis is already ridiculously low.


You can publish negative results, but the bar is usually higher. It's easiest if you find some new "positive" reason for the negative result, so you can have a narrative along the lines of: you might think X would work, and here are all the reasons it's plausible, which we used to believe too, but it turns out it doesn't, because of Y.

If you don't have a reason for the failure, just "hmm, didn't seem to work", you can still publish, but it's harder. The next-best case is if you have a large-scale study failing to find a result for something that many other people have claimed should exist, e.g. power-line cancer studies. But if it isn't in that category, it's harder. The fundamental problem is that nobody wants thousands of papers saying "X doesn't cure cancer. X2 also doesn't. X3, once again, does not cure cancer", because the vast majority of Xs don't work.


I guess that's the motivation behind the journal mentioned in the article.

He chose to publish one paper, fittingly, in the online journal PLoS Medicine, which is committed to running any methodologically sound article without regard to how “interesting” the results may be.

edit: though looking at the site, it doesn't seem to present itself with that motivation. It also costs $2900 to publish an article, so there's some financial hurdle to publishing.


The human body:

metabolism -> cell damage -> pathology -> death.

Medical science intervenes between pathology and death. It's kind of unreasonable to have high expectations for medical science given that it acts so late in the chain. A better solution is to reduce the rate at which metabolism causes cellular damage. There are only 7 different errors of metabolism leading to damage:

cellular loss/atrophy, death-resistant cells, nuclear mutations and epimutations, mtDNA mutations, protein crosslinks, junk accumulating inside cells, and junk accumulating outside cells.

Instead of wasting money on bad correlation studies, address those 7 things and we'd be light-years ahead in terms of helping people. Instead of dumping all research money into attacking manifestation-of-cellular-damage pathology foo, we should invest in basic scientific research that gives us a more rigorous understanding of cell biology.


I think we can and should be investing in both. Or rather, in the whole spectrum between basic science and direct medical research. The kinds of systems that make a good biology paper are still pretty abstracted from anything directly applicable, so it's good to know that the science still gives you the right result in real people.


Medical research is mostly paid for by the pharma industry, and this industry as a whole is NOT interested in preventing people from getting ill, for obvious reasons. Another nice example of market failure :)


The pharma industry would not want to find a preventative pill that they can sell to 300 million healthy people instead of 500,000 sick people?

That doesn't make a lot of sense.

(Incidentally, the #1 and #3 drugs are preventative, not curative. http://www.vaughns-1-pagers.com/medicine/prescription-drug-s.... )


Fixed link: http://www.vaughns-1-pagers.com/medicine/prescription-drug-s...

Are Lipitor and Plavix truly preventative? Aren't they prescribed when there are already issues with cholesterol and the heart?


Thanks for the fix.

You take Lipitor/Plavix when you are at risk of certain blood flow related diseases (heart attack, stroke). The goal is to prevent these diseases or delay their onset in people with an elevated risk. Isn't that what we mean by preventative?

Vaccines probably also fall into this category. They are more widely used, although not exactly moneymakers [1].

[1] They used to be a small, positive revenue stream. Drug companies are trying to get out of the business now, due to fear of lawsuits by Jenny McCarthy and her ilk.


Hummm, I think "preventative" for me means focusing on causes. Such drugs may act on symptoms (I don't know how effective they are; it wouldn't surprise me, in light of this article, if they are not, especially considering side effects), but they would not, IMO, be in the same class of preventative measures as exercising regularly, sleeping well, eating well, socializing regularly, de-stressing, etc. -- all of which act "upstream" on our overall health.

In other words, if you eat fast food, watch TV for hours each day, don't exercise, work too much and sleep too little, then develop bad numbers on your blood tests, taking a drug to improve those numbers a bit shouldn't be considered preventative.


If you define "preventative" to mean "lifestyle factors", then you are correct that drug companies won't spend much effort researching them. Similarly, if you define "search" to mean "searching for oil", then Google doesn't do much research on search.


You may disagree with the colloquial usage/meaning of the word "preventative," but in the context of health care, I'd argue it's a worrisome sign when the scope of that word expands from lifestyle changes (which even my doctor friends would often argue for) to also include drug interventions. Though certainly the pharma industry would love for that to be true.

Disagree with your other analogy: I don't think anyone outside the oil industry would confuse the word "search" to mean surveying for oil. Oil-men have their own vocabulary for that anyway.


That's not true. If they could find something to prevent people from getting ill, it would probably be the biggest seller ever, and given that they are not God, there will probably always be ill people, even if they prevented some illnesses.

They don't do it because they don't know how, not because of any sort of conspiracy.


You seem to forget that big pharma actually invents new diseases, to sell drugs that target them.


Do you mean "invent" in the ontological sense?


While reading this, two things kept popping up in my head:

1. Science and medicine are as much about marketing as anything else out there. If you're not making headlines, you're going nowhere.

This is not new by any means. The story of evolution and how Darwin "beat" Wallace to fame is a good example. History is littered with similar stories.

2. This is the peer review process working in full swing. Dammit, it's very slow, but it works -- eventually. Journal papers are supposed to be peer reviewed to catch the obviously wrong ones, and the downside is explained well in the article. However, the key is that whatever bad research slips through is caught by repetition and meta-analysis. And if those repetitions and analyses are themselves flawed, others will point that out.

It's exactly like open source. No one guarantees that open source produces bug-free code, but we trust that the bugs will get weeded out faster.


To your second point, I have two things to say:

1) I think a big point of this article is that the peer review process is misunderstood by the general public (and perhaps many doctors).

  Though scientists and science journalists are constantly talking up the value of the peer-review process, researchers admit among themselves that biased, erroneous, and even blatantly fraudulent studies easily slip through it. Nature, the grande dame of science journals, stated in a 2006 editorial, "Scientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth."
More work needs to go into educating the public as to the meaning of scientific studies.

2) The scope of money involved in the endeavor is too large to be happy with things just slowly weeding themselves out, as is permitted in the OS community.

Higher standards can and should be enforced in all medical studies, an area in which progress is being made but where there is still a ways to go. Let me push back and say that if the medical research community were exactly like the open source community, we wouldn't be asking much of those doing the research.


I know what peer review is. And to your point about the demands we make of researchers and the stakes involved, have a read of the Cochrane Collaboration website. Most of their conclusions are "no real evidence, need more studies". It's sobering how much is out there that doesn't stand up to really demanding analysis.


The idea that the process of science is largely about producing a lot of wrong answers in the process of sifting for the right ones is not novel. But it is somehow the nature of this age for people to trumpet every study that makes it into the public eye as True and Wondrous.

So patient advocates get stuck with the additional duty of having to explain that a good half of what you read is wrong, and evaluating science for the layman is not simple. It's possible, and not especially hard, but it requires some work:

http://www.fightaging.org/archives/2009/05/how-to-read-the-o...

"The scientific method is the greatest of inventions: when used to organize and analyze the flawed output of we flawed humans, it leads to truth and discovery. It is how we sift the gems of progress from the rubble of short-sighted human nature, magical thinking, willful ignorance, and other self-sabotaging but entirely natural behaviors.

"The scientific community doesn't produce an output of nice, neat tablets of truth, pronouncements come down from the mountain, however. It produces theories that are then backed by varying weights of evidence: a theory with a lot of support stands until deposed by new results. But it's not that neat in practice either. The array of theories presently in the making is a vastly complex and shifting edifice of debate, contradictory research results, and opinion. You might compare the output of the scientific community in this sense with the output of a financial market: a staggeringly varied torrent of data that is confusing and overwhelming to the layperson, but which - when considered in aggregate - more clearly shows the way to someone who has learned to read the ticker tape."



