Considering that most of these studies are run by people with an interest in getting a certain result, it's likely that experimental errors and selective non-publication will bias the published numbers in a particular direction.
The scientific method has never been proof against people who lie by omission.
He's writing about results that were validated through replication. That is the scientific method. If what he's saying is accurate, the problem isn't a few bad apples (that's the proverbial reflexive defense of the status quo anyway).
But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain.
I have no idea if it's accurate, but Lehrer's a credible journalist, less prone to the sensational than most. If he's writing about this, it's probably because serious scientists are concerned about it.
(On another note, when I pasted the above, the following text showed up appended to my selection: "Read more http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_..." How can such a venerable publication as the New Yorker resort to such tackiness?)
The replication they are talking about in these particular pharma studies is of the nature that a pharma corp orders up twelve identical 8-person studies. Three studies come back showing their new pill helped a tiny bit, six show the pill did nothing, and three show the patients got worse. They then cherry-pick the three that showed it helped, and possibly toss in one of the ones that showed the pill did nothing just to cover up what they are doing. They then publish a paper showing that 3 out of 4 studies validated that the pills work. The other 8 studies are set on fire and never mentioned. And there are your multiple studies.
This is not some crazed conspiracy theory either. Running multiple very-small-sample studies rather than one slightly larger study, and then discarding and never mentioning the studies with the least favorable results, is how this is now known to be done. That methodology is the reason some journals have started requiring that all studies be registered in advance as a condition of publishing results, so that companies can't selectively discard results the way they have been doing.
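To make the arithmetic concrete, here's a minimal sketch (my own toy numbers, not from the article or any actual trial): simulate twelve tiny trials of a drug with no real effect, keep only the four most favorable, and see how often the cherry-picked set supports the claim that "3 out of 4 studies showed benefit".

    import random
    import statistics

    def run_trial(n=8, true_effect=0.0):
        # One tiny trial of a drug with *no* real effect: return the mean
        # improvement of the treatment group minus the control group.
        treatment = [random.gauss(true_effect, 1.0) for _ in range(n)]
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        return statistics.mean(treatment) - statistics.mean(control)

    random.seed(0)
    successes = 0
    for _ in range(10_000):
        results = sorted(run_trial() for _ in range(12))  # twelve identical small trials
        best_four = results[-4:]                          # keep only the most favorable four
        if sum(r > 0 for r in best_four) >= 3:            # "3 of 4 studies showed benefit"
            successes += 1

    print(f"Cherry-picking lets you claim '3 of 4 positive' {successes / 10_000:.0%} of the time")

Even with a completely inert pill, that claim comes out true in roughly 98% of simulated runs, which is the whole point of requiring pre-registration.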
Actually, it does. The likelihood that ESP actually works is significantly less than the likelihood of "the decline effect", which is explained by a number of perfectly rational factors (touched upon in the article).
If you want to claim that "ESP is bullshit" (as you and I both do, I imagine), you somehow need to account for those (rare) studies which oddly enough seem to support an ESP effect at a statistically significant level. "The decline effect" goes a ways toward providing this explanation.
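As a back-of-the-envelope sketch (the study count is an assumption for illustration, not a figure from the article): if a couple of hundred labs each test a nonexistent effect at the conventional p < 0.05 threshold, chance alone guarantees a handful of "statistically significant" ESP results.

    alpha = 0.05       # conventional significance threshold
    n_studies = 200    # assumed number of independent ESP studies (illustrative)

    expected_false_positives = n_studies * alpha
    p_at_least_one = 1 - (1 - alpha) ** n_studies  # chance of at least one "hit" under the null

    print(f"Expected false positives: {expected_false_positives:.0f}")
    print(f"P(at least one 'significant' ESP study): {p_at_least_one:.6f}")

With publication bias, those roughly ten chance hits are the studies you hear about, and "the decline effect" is what it looks like when later replications regress back toward nothing.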
"But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain."
Good! That's what science is all about. Making theories that can later be "disproved" or "confirmed". It's when you cannot disprove something that it is no longer science.
This means nothing can ever be confirmed though -- you never know if you'll learn something later to disprove what you think you know now. Which in turn means a certain level of humility is required. To me, that's where science falls short -- anything that doesn't fit what is currently 'known' and 'proven' is not taken seriously.
>"He's writing about results that were validated through replication. That is the scientific method."
Experiments confirming phlogiston were validated through replication. The theory of phlogiston was, however, incorrect. That's the problem with the scientific method.
The Theory of Phlogiston was discredited via the scientific method. It was in the act of replication that it got challenged. Phlogiston is an example of a theory that eventually got rejected as new facts were discovered and experimental design was critiqued, despite having about 100 years of apparent success.
Phlogiston theory was, however, useful to chemists. When it met its limits, and a better theory came along, they of course dumped it.
Apparently, the issue isn't the scientific method per se, but selfish interests who intentionally compromise experimental design and scientific debate for their own fame, glory, and gold.
The good news is that these half-truths and lies will get in the way of someone else's research and agenda, and that same someone else will likely make a big stink of it.
Is there any non-paywall way to view the article? This looks like it might be interesting but I can only see the first couple paragraphs. The title makes me think of three things: Paul Feyerabend, Bayesian inference, and the increasing effectiveness of placebos. I'd be curious to see if any of them are in the article.
A rather sensationalist headline given that the scientific method already recognises the problems of human researchers and not only offers some mechanisms to counter those problems, but is eager for more.
I agree. The article's headline has a distinct anti-intellectual flair, and yet the article itself points out human flaws in research rather than a fundamental flaw in the scientific method.
This is not so much a problem with the scientific method as a problem with self-interested applications of statistics. F still equals m*a hundreds of years later, and that's not about to change.
Yet I wonder if "the" scientific method has in fact been overapplied way past its sweet spot of mechanics and so on. It's by no means obvious that it works as well in, say, medicine, let alone psychology, let alone sociology, let alone economics. The fact that it's the cult of our age perhaps blinds us from asking the interesting questions about its limits.
Medical science seems, sad to say, incompetent (http://care.diabetesjournals.org/content/17/2/152.abstract rediscovers integration, and statistical illiteracy among doctors has been repeatedly reported) and corrupt (conferences in lavish locations, vendor-sponsored efficacy research).
I'm sure there are good researchers, don't get me wrong, but I'm also not surprised that some really questionable stuff gets reported.
I'm, sadly, even less aware of the state of research in sociology, but I'd like to point out that non-scientifically-tested psychological theories tend to be complete nonsense (cf. lots of Freud's work). With respect to economics: the field has many problems, but a book like Freakonomics is a very readable argument in favour of the use of statistics in (micro-)economic science and related fields. (Do note that the scientist-author, Levitt, actually understands statistics. This is, I suspect, important.)
That said, there are issues with the scientific method, like the fact that it tells us to drop falsified hypotheses but does not, in itself, teach us how to find interesting results. That's not germane to your observation, though.
One reason the scientific method does not work on economics is that markets are anti-inductive[1]. If you can establish a regular pattern in their behavior, market participants start exploiting that pattern, and it goes away.
I'm not sure this is such a strong attack on the scientific method. First, there's much more to economics than predicting the stock market - a proper understanding of demand curves is useful even if everyone else has it as well. In fact, I'm not sure that predicting the stock market has more to do with mathematics than with mass psychology.
That said, I do agree that properties of the stock market, once widely known, tend to be arbitraged away. From the article:
> There was a time when the Dow systematically tended to drop on Friday and rise on Monday, and once this was noticed and published, the effect went away.
However, that doesn't mean that the scientific method cannot produce true conclusions - it's just that those don't tend to stay true.
lesswrong has much more impressive criticisms of the scientific method, IMHO. (The main one being that, while the scientific method will hopefully prevent you from endlessly clinging to a wrong hypothesis, it won't tell you how to find good hypotheses.)
Oh yes, there are definitely some problems with over-zealous applications of the scientific method (or rather, over-zealous discarding of other methods of reasoning) - but this article doesn't really cover them. Not everything of value can be measured in a double-blind experiment.
What's "the scientific method" you speak of? There main idea behind scientific reasoning is "Look for evidence to test things, don't be afraid to challenge the big boys"
There's quite a bit more to the scientific method than that -- the notions of controlled observation and reproducible results surely matter, as does the principle of parsimonious explanation.
The scientific method is fine. Note that this is about certain psychiatric pharmaceuticals being found not to have the efficacy that was earlier claimed, in studies done by pharmaceutical companies who doctored results and played statistics games and selectively hid results that didn't support profitability.
It's fascinating to see the level of hubris in the corporate pharmaceutical industry: after being cornered and confronted with the fact that their self-serving, fabricated, so-called research has been fraudulent for decades, they would now hold a conference to announce that the scientific method has failed.
Did you read the article? It is not primarily about the pharmaceutical industry, or about doctoring results, although it touches on both of those topics. The argument is far more subtle, and more far-reaching.