There is always the risk with science that you will "finish", which is to say you'll figure something out and by and large be done with it. Nothing else to see, etc. I read an article many years ago about the long-lived efficacy of aspirin, or acetylsalicylic acid. Here is a drug that has been around literally since the Stone Age, when folks chewed willow bark, and it continues to be a 'wonder drug', whereas more modern drugs seem to arrive on the scene and fade away. While there were obvious cases like antibiotics, which became ineffective against resistant bacteria, there were a number of drugs that seem never to have been effective at all; rather, they were marketed, pushed really hard, and then faded. The paper observed that for all their lack of sophistication, witch doctors and other lay healers had a vested interest in keeping things that worked and discarding things that didn't. But that doesn't always seem to be the case for modern medicine. Rather, there is the 'new recommendation' vs. the 'old recommendation', but rarely any follow-up on whether the old recommendation or the new recommendation actually does anything positive for treatment.
He's a physician who writes mostly about bad science reporting, pseudo-science and quackery, and Bad Pharma is all about the tricks that pharmaceutical companies get up to to ensure that trials with negative outcomes never see the light of day, and to spin their products in the best possible light - like telling doctors that drug X is more effective than a placebo, but failing to mention that it's no more effective than existing drugs on the market. Ultimately, if the doctors who are prescribing the drug don't have the complete picture, how are they supposed to make an informed choice about what to give you?
I heard him talk a few weeks ago and while I might not call him 100% unbiased (I think that some of his allegations are a touch exaggerated in terms of their potential harm) he's definitely very interesting and eye-opening.
And more recently, the proposed UK copyright reform - also expected to become legislation in the next year - included a specific fair dealing right for data mining on journals, like the work of Ioannidis. The publishers had been arguing that this was a separate right from reading (which the researchers have already paid for).
(link is to discussion of the older Hargreaves report; the exception made it into the final proposal)
It's not just business concerns though. Many of the individuals doing medical research are just flat-out unqualified/untrained to design a study and manipulate and analyze data. I'm not a statistics guru like some on HN, but I'm pretty good at knowing what to do with a given set of data. I'm exposed to the research efforts of medical professionals through my social network and fiancée, and generally I find their methods/analyses to be either explicitly bad or misguided. Two easy examples would be a general lack of a correction for alpha-inflation and little attention paid to normality of data.
Yes, it's anecdotal experience, and that must be recognized. But, in my observation, its pervasiveness is noteworthy.
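To make the alpha-inflation point concrete, here is a toy simulation (my own illustration, not from the comment above): if a study measures many outcomes and tests each at alpha = 0.05 without correcting for multiple comparisons, the chance of at least one spurious "significant" finding is far higher than 5%.

```python
import random

random.seed(42)

ALPHA = 0.05
N_TESTS = 20        # e.g. 20 outcome measures in one study (hypothetical)
N_SIMULATIONS = 10_000

# Under the null hypothesis (no real effect anywhere), each p-value is
# uniformly distributed on [0, 1]. Count how often at least one of the
# 20 tests comes up "significant" purely by chance.
false_positive_runs = 0
for _ in range(N_SIMULATIONS):
    p_values = [random.random() for _ in range(N_TESTS)]
    if any(p < ALPHA for p in p_values):
        false_positive_runs += 1

family_wise_error = false_positive_runs / N_SIMULATIONS
# Theory predicts 1 - (1 - 0.05)**20, roughly 0.64: about two thirds of
# such studies would report at least one spurious "finding" uncorrected.
print(f"Uncorrected family-wise error rate: {family_wise_error:.2f}")

# Bonferroni correction: test each p-value against ALPHA / N_TESTS
# instead, which pulls the family-wise rate back near 5%.
corrected_runs = sum(
    1 for _ in range(N_SIMULATIONS)
    if any(random.random() < ALPHA / N_TESTS for _ in range(N_TESTS))
)
print(f"Bonferroni-corrected rate:          {corrected_runs / N_SIMULATIONS:.2f}")
```

Bonferroni is the bluntest possible correction (there are less conservative ones, like Holm or Benjamini-Hochberg), but even this much is often missing from the analyses being described.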
For example, my father is an oncologist who does a lot of clinical trials. When it comes to the number crunching side of the trials, he won't be doing any of it himself but relying on data analysts who work for him (who have statistics backgrounds) and statisticians who work for the MRC (Medical Research Council), CRUK (Cancer Research UK) and the academic institutions which his department is affiliated with.
My experience is anecdotal as well, but it largely can be summed up as 'doctors who are doing serious research and/or trials ensure that the stats are solid, however most doctors aren't doing research at that level'.
"Stanford orthopedic surgeon Eugene Carragee"
"One driving force is John P.A. Ioannidis, chief of the Stanford Prevention Research Center."
"Stanford's Marcia L. Stefanick, PhD '82, a professor of medicine"
"Robert Tibshirani, a professor of health research and policy, and statistics at Stanford"
"Dean of Medicine Philip Pizzo"
With the exception of "Muin J. Khoury, director of the Office of Public Health Genomics at the Centers for Disease Control and Prevention, has worked with Ioannidis" I can find nothing in this article that offers any counter to this piece which, while it could be correct, is clearly slanted toward touting the work that these people at Stanford are doing.
Not that being well balanced (presenting opposing viewpoints) is necessary in a university magazine, but there is no corroboration for what they are saying here. This is not the same as saying they are not correct; I am just pointing out that I don't accept it on its face, and it seems to be the type of thing that would get repeated by the mainstream media (the same way the studies that they are complaining about get repeated).
Edit: Clarified some things.
It strikes me that this author happened upon a bunch of interesting Stanford people doing relatively related work. Sure, the author was trying to write an article about Stanford, but it's not like these people are inappropriate to mention in the same breath.
The study did show, relatively conclusively (in a certain population of women on long-term Rx), that HRT can increase cardiovascular risk factors.
This is useful information, as women have lower CVD risk than men up to menopause (the Framingham study showed reduced CV risk for women still cycling compared with same-age women past menopause), and oestrogen was thought to be the protective factor; therefore giving HRT should be protective.
The women studied were all postmenopausal (many more than 1-2 years out) and had varying risk factors for disease. I.e. they had not just reached menopause, and thus represent a different population to that which would normally be prescribed HRT. Besides increased heart disease, stroke and VTE, the study group also had reductions in colorectal cancer and hip fractures that were statistically significant.
Subsequent to this study, the KEEPS study (which may not have been published yet), which randomised women to placebo, transdermal HRT or oral HRT and selected patients more appropriately (women actually going through menopause), is showing no significant difference in CV outcomes, stroke, blood pressure or cancers between the groups.
Another study, the DOPS study, a 10-year follow-up of 1,000 women over 50 on either HRT or placebo, shows decreased CVD and no change in stroke or cancer risks.
This study overturned the notion that for the general female population, HRT would exert various cardiovascular benefits.
Nowadays it is established practice to only use HRT for severe symptoms of menopause (severe, lifestyle-crimping hot flushes, mood changes etc.), which in some women can last for many years. However, treatment is meant to be reviewed regularly and tapered down.
The HRT study mentioned in the article (WHI study) looked at the effects of starting HRT, often years after menopause, often in women with existing morbidity.
Those women who started it at the correct time, i.e. healthy women aged around 50, lowered their all-cause mortality by 25%+.
Thus, the original analysis of HRT WAS correct. Many women have died young since the WHI published its erroneous conclusions who would not have, had they taken HRT.
As an example, my friend is doing honours research right now, and her supervisor is very emphatic that she should do an experiment that will be 'successful' so she can publish it. My question is, doesn't performing an 'unsuccessful' experiment progress the state of the art as much as proving something? If a reasonable researcher might have an idea, and I prove it won't work, why wouldn't that be shared within the community as a negative result?
Let's distinguish two types of "negative" studies: one is a study whose result simply failed to be positive. "We performed exome sequencing in a kindred but failed to identify a common mutation in all affected members of the family." There are so many possible error modes, including erroneous fundamental assumptions about where this family's defect lies, that this is possibly uninteresting to virtually anyone.
The second type is a study that was done in a way that was powered to detect a positive result, had it been there. "We performed a randomized controlled trial in 1,000 patients with hemochromatosis to determine whether iron chelation was non-inferior to bloodletting for the treatment of hemochromatosis. We found that iron chelation failed to achieve non-inferiority." In this example (which is made up), this is a negative result that teaches us something immediately, and which could change practice (or could tell everyone that practice is not about to change despite rumors of a potentially new therapy).
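"Powered to detect a positive result" has a precise meaning: before the trial, you choose a sample size large enough that a real effect of a given size would very likely show up as significant. A rough back-of-envelope sketch (all numbers hypothetical, not taken from the hemochromatosis example above):

```python
import math
from statistics import NormalDist


def sample_size_per_arm(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate n per arm for a two-sided two-proportion z-test,
    using the standard normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for a two-sided 5% test
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    variance = (p_control * (1 - p_control)
                + p_treatment * (1 - p_treatment))
    effect = abs(p_control - p_treatment)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)


# Suppose the standard therapy succeeds in 70% of patients and we want
# 80% power to detect a new therapy reaching 80% (hypothetical numbers):
n = sample_size_per_arm(0.70, 0.80)
print(f"Need about {n} patients per arm")
```

A trial enrolling far fewer patients than this calculation demands can't distinguish "no effect" from "too small to see", which is exactly why the first type of negative study is uninformative and the second type is not. (Non-inferiority trials use a pre-specified margin instead of a raw difference, but the powering logic is the same.)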
Along these lines, to prevent these valuable yet "negative" trials from going unpublished, high-end medical journals such as the New England Journal of Medicine are now requiring that all trials be entered into a registry.
Problem is, you don't want a high publication count in such a hypothetical journal. That's the other side of publication bias.
But perhaps I'm being too cynical (must be the New Year). Such a journal has been constructed.
With luck, it will be wildly (un)successful :).
A lot of people are kind of grumpy about null results not being published, for example papers of the form "we tried X and weren't able to make it work; here is how we tried". That is an 'unsuccessful' result (maybe you just suck), but some people would like to be able to read through stories of failure to see if there is a way to be successful. The journal of the null hypothesis (http://www.jasnh.com/) is an effort to do something like this, but it's generally the punchline of jokes...
He has a website at http://www.badscience.net
He also wrote a book titled Bad Pharma. The foreword gives some insight into just how drugs are tested: http://www.badscience.net/2012/09/heres-the-intro-to-my-new-...