Something Doesn't Add Up: John Ioannidis and medical statistics (stanford.edu)
52 points by jseliger 1604 days ago | 23 comments

This is a good read; it's hard to find things written about the issue of medicine vs. the business of medicine.

There is always the risk with science that you will "finish," which is to say you'll figure something out and by and large be done with it. Nothing else to see, etc. I read an article many years ago about the long-lived efficacy of aspirin, or acetylsalicylic acid. Here is a drug that has been around literally since the stone age, when folks chewed willow bark, and it continues to be a 'wonder drug,' whereas more modern drugs seem to arrive on the scene and fade away. While there were obvious cases like antibiotics becoming ineffective against resistant bacteria, there were a number of drugs that seem to have never been effective; rather, they were marketed, pushed really hard, and then faded. The paper observed that for all their lack of sophistication, witch doctors and other lay healers had a vested interest in keeping things that worked and discarding things that didn't. But that doesn't always seem to be the case for modern medicine. Rather, there is the 'new recommendation' vs. the 'old recommendation,' but rarely any follow-up on whether the old recommendation or the new recommendation actually does anything positive for treatment.

Ben Goldacre's Bad Pharma ( http://www.amazon.com/Bad-Pharma-Companies-Mislead-Patients/... http://www.amazon.co.uk/Bad-Pharma-companies-mislead-patient... ) covers some of the same ground.

He's a physician who writes mostly about bad science reporting, pseudoscience, and quackery, and Bad Pharma is all about the tricks that pharmaceutical companies get up to to ensure that trials with negative outcomes never see the light of day, and to spin their products in the best possible light: for example, telling doctors that drug X is more effective than a placebo, but failing to mention that it's no more effective than existing drugs on the market. Ultimately, if the doctors who are prescribing the drug don't have the complete picture, how are they supposed to make an informed choice about what to give you?

I heard him talk a few weeks ago, and while I might not call him 100% unbiased (I think some of his allegations are a touch exaggerated in terms of their potential harm), he's definitely very interesting and eye-opening.

Buried inside this depressing article is a depressing story about a non-needle-stick syringe that has been prevented from hitting the market. There are a lot of innovation-proof systems and people within hospitals; good on you if you break through it all. http://www.washingtonmonthly.com/features/2010/1007.blake.ht...

It's worth mentioning that there is some good news on the horizon here. Bad Pharma has prompted the UK government into action (ok, proposals of action) on unpublished trials: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2267966/

And more recently, the proposed UK copyright reform, also expected to become legislation in the next year, included a specific fair-dealing right for data mining on journals, like the work of Ioannidis. The publishers had been arguing that this was a separate right from reading (which the researchers have already paid for). http://www.out-law.com/en/articles/2012/june/publishers-and-... (link is to discussion of the older Hargreaves report; the exception made it into the final proposal)

> This is a good read; it's hard to find things written about the issue of medicine vs. the business of medicine.

It's not just business concerns though. Many of the individuals doing medical research are just flat-out unqualified/untrained to design a study and manipulate and analyze data. I'm not a statistics guru like some on HN, but I'm pretty good at knowing what to do with a given set of data. I'm exposed to the research efforts of medical professionals through my social network and fiancée, and generally I find their methods/analyses to be either explicitly bad or misguided. Two easy examples would be a general lack of a correction for alpha-inflation and little attention paid to normality of data.

Yes, it's anecdotal experience, and that must be recognized. But, in my observation, its pervasiveness is noteworthy.
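To make the alpha-inflation point concrete, here is a toy simulation (the sample sizes and test counts are arbitrary choices of mine, not from the comment above): when you run many tests on truly null data without correcting, the chance of at least one spurious "significant" result balloons, and a simple Bonferroni correction pulls it back down.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_studies = 2000  # simulated studies
n_tests = 20      # hypotheses tested per study, all truly null
alpha = 0.05

# Both groups are drawn from the same distribution, so any
# "significant" result is a false positive by construction.
a = rng.normal(size=(n_studies, n_tests, 30))
b = rng.normal(size=(n_studies, n_tests, 30))
pvals = stats.ttest_ind(a, b, axis=2).pvalue  # shape: (n_studies, n_tests)

uncorrected = (pvals.min(axis=1) < alpha).mean()
bonferroni = (pvals.min(axis=1) < alpha / n_tests).mean()

print(f"P(>=1 false positive per study), uncorrected: {uncorrected:.2f}")  # roughly 0.64
print(f"P(>=1 false positive per study), Bonferroni:  {bonferroni:.2f}")   # roughly 0.05
```

With 20 uncorrected tests at alpha = 0.05, the family-wise error rate is about 1 - 0.95^20 ≈ 0.64, which is what the simulation recovers.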

I'm on both sides of the fence there. I'm more than aware that most doctors don't have the training or background for serious data analysis, but all the doctors I know who are doing research or trials that rely on serious data analysis aren't doing the analysis themselves anyway.

For example, my father is an oncologist who does a lot of clinical trials. When it comes to the number-crunching side of the trials, he won't be doing any of it himself; he relies on data analysts who work for him (who have statistics backgrounds) and on statisticians who work for the MRC (Medical Research Council), CRUK (Cancer Research UK), and the academic institutions his department is affiliated with.

My experience is anecdotal as well, but it largely can be summed up as 'doctors who are doing serious research and/or trials ensure that the stats are solid, however most doctors aren't doing research at that level'.

In my experience most of them do bring in someone for the stats aspect of it, but they're often not consulted on study design (when relevant), and their "experts" typically seem to be just "experts relative to the other authors" rather than actual stats experts. But that doesn't change the probable truth in your final statement.

Stanford people mentioned:

"Stanford orthopedic surgeon Eugene Carragee"

"One driving force is John P.A. Ioannidis, chief of the Stanford Prevention Research Center."

"Stanford's Marcia L. Stefanick, PhD '82, a professor of medicine"

"Robert Tibshirani, a professor of health research and policy, and statistics at Stanford"

"Dean of Medicine Philip Pizzo"

With the exception of "Muin J. Khoury, director of the Office of Public Health Genomics at the Centers for Disease Control and Prevention, has worked with Ioannidis," I can find nothing in this article that offers any counter to this piece, which, while it could be correct, is clearly slanted toward touting the work that these people at Stanford are doing.

Not that being well balanced (presenting opposing viewpoints) is necessary in a university magazine, but there is no corroboration of what they are saying here. This is not the same as saying they are not correct; I am just pointing out that I don't accept it on its face, and it seems like the type of thing that would get repeated by the mainstream media (the same way the studies they are complaining about get repeated).

Edit: Clarified some things.

I don't think it would be overly cynical to claim that the entire point of university magazines is to convince alumni to donate.

As a non-Stanford person, I will say that Ioannidis is a reasonably big deal, and in my mind Robert Tibshirani is quite a big deal as well. The WHI was a watershed study, therefore Stefanick is relevant.

It strikes me that this author happened upon a bunch of interesting Stanford people doing relatively related work. Sure, the author was trying to write an article about Stanford, but it's not like these people are inappropriate to mention in the same breath.

Just going to clarify a point made about the WHI Study of HRT in post-menopausal women.

The study did (relatively) conclusively (in a certain population of females with long-term Rx) show that HRT can increase cardiovascular risk factors.

This is useful information: women have lower CVD risk than men up to menopause (the Framingham study showed reduced CV risk for women still cycling compared with women of the same age past menopause), and oestrogen was thought to be the protective factor, so giving HRT was expected to be protective.

The women studied were all post-menopausal (many more than 1-2 years out) and had varying risk factors for disease. I.e., they had not just reached menopause, and thus represent a different population from the one that would normally be prescribed HRT. Besides increased heart disease, stroke, and VTE, the study group also had reductions in colorectal cancer and hip fractures that were statistically significant.

Subsequent to this study, the KEEPS study (which may not have been published yet), which randomised women to placebo, transdermal HRT, or oral HRT and selected patients more appropriately (women actually going through menopause), is showing no significant difference in CV outcomes, stroke, blood pressure, or cancers between the groups.

Another study, the DOPS study, a 10-year follow-up of 1,000 women over 50 on either HRT or placebo, shows decreased CVD and no change in stroke or cancer risks.

This study overturned the notion that for the general female population, HRT would exert various cardiovascular benefits.

Nowadays it is established practice to only use HRT for severe symptoms of menopause (severe, lifestyle-crimping hot flushes, mood changes, etc.), which in some women can last for many years. However, treatment is meant to be reviewed regularly and tapered down.

Thank you.

The HRT study mentioned in the article (WHI study) looked at the effects of starting HRT, often years after menopause, often in women with existing morbidity.

Those women who started it at the correct time, i.e. healthy women aged around 50, lowered their all-cause mortality by 25%+.

Thus, the original analysis of HRT WAS correct. Since the WHI published its erroneous conclusions, many women have died young who would not have had they taken HRT.

As a serious question, do papers typically get published if they simply disprove a promising idea?

As an example, my friend is doing honours research right now, and her supervisor is very emphatic that she should do an experiment that will be 'successful' so she can publish it. My question is, doesn't performing an 'unsuccessful' experiment progress the state of the art as much as proving something? If a reasonable researcher might have an idea, and I prove it won't work, why wouldn't that be shared within the community as a negative result?

No, and this problem even has a name: publication bias. It's a known issue in most modern peer-reviewed research. There are now multiple efforts underway to counteract its effects, such as a US regulation called FDAAA 801 and its resulting data registry clinicaltrials.gov.
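A toy back-of-the-envelope calculation shows why publication bias matters (the numbers here are illustrative assumptions of mine, not figures from Ioannidis's paper): if only "positive" results get published, the fraction of published findings that are actually true can be surprisingly low.

```python
# Illustrative numbers only -- not taken from any study.
prior_true = 0.10  # fraction of tested hypotheses that are actually true
power = 0.50       # chance a study detects a true effect
alpha = 0.05       # false-positive rate when the hypothesis is false

# Rates of "positive" (i.e. publishable) findings among all studies:
true_positives = prior_true * power          # 0.05
false_positives = (1 - prior_true) * alpha   # 0.045

ppv = true_positives / (true_positives + false_positives)
print(f"Fraction of published positives that are real: {ppv:.2f}")  # 0.53
```

Under these assumptions, barely half of the "significant" findings reaching print would be real; this is essentially the arithmetic behind "Why Most Published Research Findings Are False."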

There is something of a sea change going on right now (perhaps it's still too early to use that term, but the tide is surely shifting).

Let's distinguish two types of "negative" studies: one is a study whose result simply failed to be positive. "We performed exome sequencing in a kindred but failed to identify a common mutation in all affected members of the family." There are so many possible error modes, including erroneous fundamental assumptions about where this family's defect lies, that this is possibly uninteresting to virtually anyone.

The second type is a study that was done in a way that was powered to detect a positive result, had it been there. "We performed a randomized controlled trial in 1,000 patients with hemochromatosis to determine whether iron chelation was non-inferior to bloodletting for the treatment of hemochromatosis. We found that iron chelation failed to achieve non-inferiority." In this example (which is made up), this is a negative result that teaches us something immediately, and which could change practice (or could tell everyone that practice is not about to change despite rumors of a potentially new therapy).
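For a rough sense of what "powered to detect a positive result" means for a trial like the made-up one above, here is a simple normal-approximation power calculation for a two-sample superiority test (a true non-inferiority analysis would add a margin; the effect size and sample size below are assumptions for illustration):

```python
import math
from scipy.stats import norm

def power_two_sample(delta, sigma, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test."""
    z_crit = norm.ppf(1 - alpha / 2)
    # Standardized difference between group means under the alternative.
    ncp = delta / (sigma * math.sqrt(2 / n_per_group))
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

# 1,000 patients split into 500 per arm, small standardized effect (0.2):
p = power_two_sample(delta=0.2, sigma=1.0, n_per_group=500)
print(f"power: {p:.2f}")
```

With these assumed numbers, a 1,000-patient trial has high (close to 90%) power even for a small effect, which is why a null result from such a trial is informative rather than merely inconclusive.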

Along these lines, to prevent these valuable yet "negative" trials from going unpublished, high-end medical journals such as the New England Journal of Medicine are now requiring that all trials enter into a registry.

Medical journals tend not to be interested in publishing 'negative' result papers. I can see why, to an extent: they're not of particular value or interest to the vast majority of the journal's readers. They are, however, exceptionally valuable to anyone else running trials or doing research into similar drugs in the future, so there should be, IMO, somewhere those papers can be made publicly accessible.

"Journal of Plausible, but Ultimately Unworkable Hypotheses"

Problem is, you don't want a high publication count in such a hypothetical journal. That's the other side of publication bias.

But we have computers! This would be the final step of proposing a project: check that a negative, peer-reviewed version hasn't already been published. Putting them in separate journals actually solves a lot of the complaints in this thread.

Sorry for the confusion, that wasn't my point. The issue isn't that there would be a ton of research to wade through; the issue is that scientists whose research didn't pan out would now have a track record of failure. I can easily imagine an academic hiring conversation going like, "This guy has a higher failure-to-success ratio than average, and his successful projects aren't above average."

EDIT: But perhaps I'm being too cynical (must be the New Year). Such a journal has been constructed.


With luck, it will be wildly (un)successful :).

You as a researcher do not want your name to appear in such a journal. Especially not lots of times.

yes. most likely, what the supervisor means is that the research should have some kind of story and conclusion. an 'unsuccessful' piece of research is one that does not say anything, i.e. "well we tried to replicate this other paper and you know what, we did" or "I had an idea and I thought really hard about it for about a year, but nothing really concrete ever presented itself in either theory or experimentation".

a lot of people are kind of grumpy about null hypotheses not being published, for example papers of the form "we tried X and weren't able to make it work, here is how we tried". that is an 'unsuccessful' result (maybe you just suck), but some people would like to be able to read through stories of failure to see if there is a way to be successful. the journal of the null hypothesis (http://www.jasnh.com/) is an effort to do something like this, but it's generally the punchline of jokes...

Sadly no mention of Dr Ben Goldacre who has also been beating the drum in this area. Here is a good talk he did at TED (13 minutes): http://www.ted.com/talks/ben_goldacre_what_doctors_don_t_kno...

He has a website at http://www.badscience.net

He also wrote a book titled Bad Pharma. The foreword contains some insight into just how drugs are tested: http://www.badscience.net/2012/09/heres-the-intro-to-my-new-...

I remember reading "Why Most Published Research Findings Are False" when it came out: it was both a great, bold article and the first PLoS paper that I read. I'm glad to see that Ioannidis has continued to study the subject and apparently is making at least some impact in medical research circles.
