Something is rotten in the core of science. These days I give the research the same weight I give the anecdote... and that is no weight at all.
Science has been supplanted by money and politics... At least anecdotes admit they're anecdotes!
I'm as critical as anyone (probably more so, check my comment history) of academic biology because of my background in it. There are certainly things wrong with it. And due to the nature of biology, replicating results is really hard. It's a fact of life when you deal with systems that are not perfect, not identical and very opaque.
But to say that "Science has been supplanted by money and politics" is stretching the problems of biology into a mountain of conspiracy.
Furthermore, I'm reading your "source" and it reads loudly as "I'm an underfunded big-pharma researcher who has neither the time nor the resources to properly replicate studies". Did you know that most big pharma labs do not have access to the academic literature? They mostly read abstracts because there is little budget to actually purchase the required papers.
How much do you trust labs that are A) only trying to recreate data so they can make a drug out of it and B) aren't even reading the original data? While academic labs can have grad students toil away on hard experiments for literally years before they perfect them...how long do you think Pfizer or Merck or GlaxoSmithKline is going to let their paid researchers fiddle away on a project that is probably low priority anyway?
Because, of course, the high-priority projects are the reformulations of penis-enlarging drugs or cholesterol medication...you know, the ones that actually make money.
If you are looking for snake oil and shady research, I dare you to read any research paper that comes out of big pharma labs. We would routinely read them just for laughs because they are (often) downright terrible.
To say "most big pharma labs" do not have access to the literature is laughable. We had better access than most academic institutions. If we needed a paper we didn't have access to, it took a few hours to get it. The company was more than willing to pay the $50 to get a copy of whatever paper, since we would often blow $50 running one experiment. Many of the smaller biotechs might have poor access to journals, but even then, if you could justify the cost, you could get it.
Second of all, yes, I trust labs that are trying to recreate data to make a drug out of it. You have to remember that these attempts to recreate data were a very important data point on a potential multi-million (billion?) dollar investment in a new target; these are NOT low priority projects. They WANT the data to be true. They have zero incentive for the data to not be reproducible.
Having worked in both academic and commercial labs, I would say the incentive to "tweak" results is much greater in academic labs for the following reasons:
1) Often results are never double checked in an academic lab unless the work is used in a later project. Contrast this with a pharma lab where if the data is positive, you'll have to prove it again and again.
2) Academics (both profs and students) live and die by papers, not so in industry (in fact, in the company I worked in, they preferred if you didn't publish)
3) Work in academia is often performed by relatively inexperienced undergrad and grad students, while big pharma scientists often have years of experience.
>To say "most big pharma labs" do not have access to the literature is laughable. We had better access than most academic institutions. If we needed a paper we didn't have access to, it took a few hours to get it. The company was more than willing to pay the $50 to get a copy of whatever paper, since we would often blow $50 running one experiment.
I'll admit that my knowledge of big pharma journal access is colored by those in big pharma that I've talked to (anecdotal evidence, oh the irony). Perhaps they just had poor departments or bad access, I don't know.
However, every university that I've been at has instant access to journals. I never had to wait hours for a paper...we had free rein of just about every journal. Even at my relatively small and poor undergraduate institute.
>1) Often results are never double checked in an academic lab unless the work is used in a later project.
99% of projects in academia are building off some previous grad student or post-doc's work. Sure, there are projects which are nearly impossible to replicate (I should know, I spent 1.5 years of my life trying to replicate a previous grad's project). But it's equally laughable to say that data is never double-checked: a professor's career is a long string of projects building on previous projects.
>2) Academics (both profs and students) live and die by papers, not so in [industry]
I'll concede that there is often pressure to publish positive results in an academic setting. However, as you rightly mentioned, academics live and die by their papers. It just takes one lab refuting your paper to have a burned career. While I agree that many academics prefer to just ignore papers they can't recreate, there is still a lot riding on publishing replicable data.
>3) Work in academia is often performed by relatively inexperienced undergrad and grad students, while big pharma scientists often have years of experience.
This is a pretty baseless statement. I know plenty of techs working at big pharma who just graduated with an undergrad degree and have zero wet-bench experience (just like I know of plenty who did the same in academia). Conversely, I can't even count the number of post-docs and senior scientists that work at various universities, with literally centuries of experience between them.
1. The big pharma guys have instant access to journals. When I say we had to wait a couple hours, it was because I was looking for a paper from "The Russian Journal of Chemistry" from 1912. We had a vendor who could track down anything. For any of the big journals, we had the same access as academia.
2. We agree on this point. If a lab experiment is used in a later project, it HAS to work or else the future work can't occur. However, lots of projects have "arms", where the experiment is an interesting observation that is never pursued. These are often "one-off" experiments that are published, but never repeated in the same lab.
3. I am by no means painting academics with a broad brush here. I think most academic research is done on the up-and-up and the results are valid, if not hard to replicate (this is research!). I think one issue is the one pointed out in the parent comment. You run 5 reactions, two fail and the three that work produce yields of 50%, 70% and 80%. What gets published? 80%. The devil is in the details. In big pharma, you are trying to make a drug and the science better work or else you can't bring it to market. Much higher standards for reproducibility.
4. I guess my thought here is based on the fact that big pharma typically hires from academic labs. All those post-docs and senior scientists with years of experience? That's who big pharma hires. So overall, I would imagine that the level of experience in big pharma is greater than the average you would see in academia (which makes sense since academia is training for working in places like big pharma).
Once again, I always shy away from descriptions that put all "big pharma" or "academic" researchers into one pile. There are brilliant people on both sides and crappy people on both sides.
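The "what gets published? 80%." effect from point 3 can be sketched as a toy simulation. All the numbers here (a true mean yield of 60%, a 40% outright failure rate, five attempts per project) are made up for illustration, not taken from any real lab:

```python
import random

random.seed(0)

# Hypothetical numbers: true yield averages 60%, runs scatter around it,
# and 40% of attempts simply fail and are quietly discarded.
def run_reactions(n=5, true_mean=0.60, sd=0.15, fail_rate=0.4):
    yields = []
    for _ in range(n):
        if random.random() < fail_rate:
            continue  # failed run, never written up
        yields.append(min(1.0, max(0.0, random.gauss(true_mean, sd))))
    return yields

published, all_runs = [], []
for _ in range(10_000):
    ys = run_reactions()
    if ys:
        published.append(max(ys))  # only the best yield makes the paper
        all_runs.extend(ys)

print(f"mean of all successful runs: {sum(all_runs) / len(all_runs):.0%}")
print(f"mean of published yields:    {sum(published) / len(published):.0%}")
```

The published figure runs well above the true average, and anyone trying to reproduce it will, on average, come up short, without anyone having committed fraud.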
Thanks for the useful counter-points...I'm now armed with some more anecdotes (hah!) on the other end of the "big pharma" spectrum.
They certainly had access to the original data. To quote:
> To address these concerns, when findings could not be reproduced, an attempt was made to contact the original authors, discuss the discrepant findings, exchange reagents and repeat experiments under the authors' direction, occasionally even in the laboratory of the original investigator.
There are quite a few other studies which raise similar statistical questions about medical research; e.g.:
Did you read the study the parent post is talking about? A well funded laboratory, that was trying to not "just believe" research (as everyone else apparently does), was trying to replicate these results. If the science was good, all it should have taken is time and money (both of which they had enough of). And yet, 47 out of 53 celebrated results published in peer reviewed papers of the highest caliber could not be replicated. Let that sink in for a minute before you reply.
> there is _no_ reason to say that all research cannot be trusted.
Ok. Your reason for stating that research can be trusted is that it is eventually either replicated (thus confirmed) or thrown out (thus shown false). Is that right? (You didn't state that as your reason, so perhaps you have other ideas, but that's a common one, so I'll reply to it.)
Assuming that's the case -- do you have any idea what percentage of results are replicated? And how much time after official publication?
Because if it takes e.g. 30 years until a bad publication is discredited, and (as the data point given by the parent shows) there are areas in which 90% of the data apparently can be discredited when you try to replicate it, then there actually might be reason to distrust research in general, because at any given point in time, more than 90% of non-discredited published results are wrong.
See also http://saveyourself.ca/articles/ioannidis.php (and the paper it references). This situation is not science fiction. A 90% failure-to-replicate rate is probably limited to very few subjects. But 50% overall in medicine and biology is totally believable.
Which is not to say science (the abstract idea / discipline / method) is wrong - it's right. It's just that the thing we humans practice and often call "science" is very, very far from the ideal of science. Ignore that at your own peril.
I would argue that if even those research papers could not be replicated, an anecdote is all but worthless.
Statistics are themselves misleading - there are whole books on the subject (oh no! an anecdote! better close your mind now). They are highly contextual, but the popular press excels at stripping that context and proclaiming absurd extremes. Anecdotes are excellent context, putting statistics into perspective.
Another idiotic strawman argument.
If science and anecdotes are equally bull to you, how do you make up your mind about things? Magic?
It's not science that is the problem. It's that biology considers 95% confidence sufficient. Considering how many studies are run each year, this virtually guarantees incorrect results.
The reason they do that is that it's impossible to get better results; they just cannot run enough trials. So they are stuck. The situation would only change if:
a) all data, everywhere in the world, including negative results, were published, regardless of funding source or publication venue.
b) someone actually looked at that data, normalized it, and used it to assess the real significance of every result, in a sane manner (e.g. by using a bayesian inference with some reasonably behaving universal prior).
Neither a nor b will ever happen, and both are essential.
(Note: publication of all data is not a sufficient requirement: if 20 independent labs each run the same experiment on a nonexistent effect, one of them is expected to reach 95% confidence by chance, and that lab's data, taken on its own, looks legit. This _will_ and _already does_ happen by chance.)
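The 20-labs scenario is easy to check numerically. The sketch below (all parameters are illustrative: 20 labs, 30 subjects per group, a crude z-test on normal data with known variance) simulates many "fields" in which no real effect exists and counts how often at least one lab still clears the 95% bar:

```python
import random

random.seed(42)

def null_experiment(n=30):
    """One lab tests a treatment with NO real effect: two groups of n
    draws from the same N(0, 1) distribution, compared with a z-test
    (sd known to be 1, so the se of the mean difference is sqrt(2/n))."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / (2 / n) ** 0.5
    return abs(z) > 1.96  # "significant" at 95% confidence

trials, labs = 2000, 20
# In how many 20-lab "fields" does at least one lab find an effect?
hits = sum(any(null_experiment() for _ in range(labs)) for _ in range(trials))
print(f"at least one false positive in {hits / trials:.0%} of fields")
# theory: 1 - 0.95**20, roughly 64%
```

So with 20 labs chasing the same null effect, a "significant" result somewhere is the expected outcome, not a fluke.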
Let's take anything involving nutrition. Some challenges are: (1) people lie, (2) such studies can't be double-blind so placebo kicks in, (3) the statistical significance of short-term studies is zero, (4) you can't control all the variables, unless you lock those people in a cage and (5) most conclusions of such studies have the potential to confuse the cause and the effect.
But not all of science is like that. Just medicine.
Also what does "the statistical significance of short-term studies is zero" mean? I don't think it means what you think it means.
I would argue that short-term studies (for nutrition anyway) have little clinical significance, despite their statistical significance. I'm in medicine, and I read papers all the time detecting a statistical difference between control and experimental groups, but the difference is so tiny that it's meaningless. This is the balance you have to strike with large sample sizes. With a large enough sample, small differences are likely to be statistically significant but the key is determining if the difference is worthwhile.
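The large-sample point can be made concrete with a sketch. The numbers here are invented purely for illustration (a true difference of 0.2 mmHg against a between-patient sd of 10 mmHg, half a million patients per arm):

```python
import math
import random

random.seed(1)

# Made-up numbers: a 0.2 mmHg true difference, sd 10 mmHg, huge n.
true_diff, sd, n = 0.2, 10.0, 500_000

control = [random.gauss(0.0, sd) for _ in range(n)]
treated = [random.gauss(true_diff, sd) for _ in range(n)]

diff = sum(treated) / n - sum(control) / n
se = sd * math.sqrt(2 / n)  # standard error of the mean difference
print(f"observed difference: {diff:.2f} mmHg, z = {diff / se:.1f}")
# z comes out in the double digits, so p is vanishingly small, yet
# 0.2 mmHg is clinically meaningless.
```

Highly "significant" in the statistical sense, irrelevant in the clinical one: exactly the trap described above.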
I blame bad science reporting for a lot of the anger you are feeling. Reporters don't seem to understand what they are reporting, and often the scientists themselves are (accidentally or on purpose) making it worse.
That's nice in theory, but does not happen for most published research.
> I'm in medicine, and I read papers all the time detecting a statistical difference between control and experimental groups, but the difference is so tiny that it's meaningless.
I'm trained in statistics. My ex was an MD. I used to read the NEJM for fun for a couple of years. Most of the results published are barely statistically significant for the small group they tested ("our sample included 40 caucasian females between the ages of 37 and 48, and we have a p value of 0.03" with no mention of the context which might make that p value meaningless - but let's assume they got that part right). And then, a couple of years later, some other study takes that result as absolute truth, but assumes it applies to any woman aged >30. And a couple of years later, it is assumed to be universal and speculated to apply to males as well.
Is your experience different?
> I blame bad science reporting for a lot of the anger you are feeling.
I blame tenure publishing requirements. While bad reporting certainly deserves its share of contempt, people these days do everything in order to meet the publishing requirements for tenure. Most stay away from outright fabrication, but otherwise every manipulation of the data that would make it fit for a higher caliber publication is being done as long as it is not outright fraudulent -- including dropping the background context so nicely exemplified by this xkcd comic http://xkcd.com/882/ . It often is the researchers doing the bad reporting with no outside help.
Or maybe you can trust those experiments that do get replicated successfully?