

Why Science is Failing Us - quasistar
http://www.wired.com/magazine/2011/12/ff_causation/all/1

======
mindstab
I don't quite know how to adequately articulate my displeasure with this
article, but let's try.

The article basically seems to be relying on some philosophy and muddy and
different definitions in different contexts (philosophy vs science vs standard
usage) of words like "fact". Then it talks about how things are getting more
complex and we're spending more effort to learn things now than we used to
because "we know all the easy stuff". It seems to conclude that we'll still
be no better than religious, shamanistic people once we "know all there is to
know", and that it still won't do us a lot of good.

It seems to be advocating giving up on science now, with some rationale like
"quit while we're ahead".

I honestly don't get it. It seems like cloudy, woolly thinking and bad arguments.

Sure, things are getting more complex and will continue to, but that doesn't
mean we should give up, or that "it's mysteries all the way down". Every year
we learn more and fix more problems. And we have to and always have had to
make a lot of mistakes in the process. The author seems to think we're making
more mistakes now and that's an indication the game is almost up.

I disagree; medical science is still churning out amazing breakthroughs, like
HIV and cancer vaccines this year. And physics is still coming up with amazing
things.

Just because it's getting harder doesn't mean we should stop, or that we'll
hit a wall and be able to go no further (and even if we could see that wall
coming, why would we want to stop prematurely?)

Every time we thought we'd learned everything, we've been able to push on and
learn more, discover more depth, and use it more to our advantage. I don't
strictly speaking see why that has to stop just because it's getting harder.
At least any time soon. Each new level also gives us better tools to work
with.

And there have always been people saying we know enough now, or that it's
getting harder so let's stop now. Some have, and many haven't, and that's why
we still have progress. This is an age-old, endlessly recurring trend, and it
deserves the same ignoring it has always gotten. Or you can step off the train
of progress and be left behind.

I do not think science is failing us in any way. I think this article is poor
by many standards.

~~~
rayiner
The article makes a lot more sense if you think of "science" not as the
idealized process, but the actual process as practiced by people at Pfizer,
Merck, etc. Read the Hume reference not as a criticism of the scientific
method, but rather as a reminder of something we already know (correlation !=
causation), but that we in practice assume all the time in order to apply
science to certain problems.

He's saying not that the method is bad, but that things are getting complex
enough, at least in medicine, that we're hitting limits on how easily we can
practice science to the necessary level of rigor and precision.

~~~
polyfractal
To nitpick, Pfizer, Merck, et al. don't actually do science. What they do is
more akin to spraying buckshot into the bushes and hoping to hit a furry
animal hiding somewhere inside.

Not that this is a particularly bad approach. It worked amazingly well for the
last few decades and has churned out a lot of blockbuster drugs.

It is, however, starting to fall apart as all the low-hanging fruit has been
picked. It's time for the big pharmas to start doing science again. The
problem is that big pharma is thoroughly infatuated with quarterly stock
prices, not with developing new drugs. They are cannibalizing themselves in an
effort to keep stock prices up and profits growing.

It's better to think of big pharma as remarkably successful marketing engines,
not medical science companies.

For instance, I know of many scientists who work for big pharma companies
and do not have access to _any_ of the scientific literature (past the
freely available abstracts). Management will not allow them to purchase these
articles.

Consider that for a moment. The companies producing your drugs are not even
remotely up to speed on cutting edge biological knowledge.

*Obligatory "I'm in biology so I'm not making this stuff up" disclaimer.

~~~
epistasis
Let me chime in as a biologist who thinks you've hit it right on the head.
This is _precisely_ the way to think about the large pharmaceutical companies:
marketing companies. They've fired most of their scientists, instead buying
positive research results from smaller companies, and market the hell out of
the few compounds that have survived the clinical trial lottery.

More than that, the pharmas are absolutely resistant to embracing the
technology that could save them; they discard the scientists who are driving
their respective fields forward, and resist the notion that understanding the
cell as an information system can provide better returns on their trials.

There are exceptions: when Genentech was run by Art Levinson (a scientist),
he was capable of discerning science from bullshit and was an effective CEO.
However, I would sell any stock in a pharmaceutical not run by a scientist;
such a company may be able to post short-term returns, but only by selling off
any possibility of future success.

------
tokenadult
This is an important article with well chosen examples. But I think the
headline points to the wrong "cause" of failure. Scientists, the directors of
science research funding projects, and the general public can better
understand what we know and what we don't know about causation from
correlation if science teachers and journalists do a better job. For a long
time, members of the journalistic community and members of the general public
have been overinterpreting tentative scientific findings,

<http://norvig.com/experiment-design.html>

and if we learn the lessons of how to interpret research findings more
cautiously, we can all do our part to guide further research better.

As the author of the submitted article points out, "This doesn't mean that
nothing can be known or that every causal story is equally problematic. Some
explanations clearly work better than others, which is why, thanks largely to
improvements in public health, the average lifespan in the developed world
continues to increase. (According to the Centers for Disease Control and
Prevention, things like clean water and improved sanitation—and not
necessarily advances in medical technology—accounted for at least 25 of the
more than 30 years added to the lifespan of Americans during the 20th
century.) Although our reliance on statistical correlations has strict
constraints—which limit modern research—those correlations have still managed
to identify many essential risk factors, such as smoking and bad diets."

So with caution about assuming causation where the data cannot reliably show
causation,

<http://escholarship.org/uc/item/6hb3k0nz>

the huge task of biomedical research can still go forward, eventually yielding
other findings that can improve health or longevity compared to today's
baseline.

AFTER EDIT: The question posed in the first reply below is interesting. One
reason that biomarker interventions are tried more often than "hard endpoint"
interventions is simply that they are faster and easier. To really check
carefully for hard endpoints--reduced mortality and morbidity, for a medical
treatment--takes time in a clinical trial. Sometimes an effect on a
biomarker, for example serum cholesterol, can be observed right away, but if
the subjects in a study are at an age at which few subjects die from any
cause, it can be a long while before a study reveals which treatments actually
increase rather than decrease the risk of death.

The case of the drug rimonabant,

<http://en.wikipedia.org/wiki/Rimonabant>

which had reasonably strong support from animal experiments as an antiobesity
drug, is instructive. Studies of human subjects after the drug was approved in
Europe revealed a huge increase in suicidal risk among patients taking
rimonabant,

[http://www.pharmacist.com/AM/Template.cfm?Section=Pharmacy_N...](http://www.pharmacist.com/AM/Template.cfm?Section=Pharmacy_News&template=/CM/ContentDisplay.cfm&ContentID=24206)

and eventually approval of the drug in Europe was withdrawn, and the drug was
withdrawn from the market by its manufacturer, before rimonabant was ever
approved in the United States.

~~~
aidenn0
Also, it's not a failure to e.g. notice that high levels of biomarker X are
correlated with disease Y, so let's try lowering X and see what happens.
That's just the next step in determining causality.

What happens is doctors read the correlation and start massively prescribing
biomarker targeted remedies (such as vitamin B) before any causation is shown.
Why does this happen?
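A toy simulation makes the danger concrete (everything below is hypothetical:
a hidden confounder Z drives both a biomarker X and disease risk Y, so the two
correlate strongly even though lowering X does nothing to Y):

```python
import random

random.seed(0)

def draw_patient(lower_x=False):
    """Hypothetical model: hidden factor z drives both biomarker x and risk y."""
    z = random.gauss(0, 1)            # unobserved confounder
    x = z + random.gauss(0, 0.5)      # biomarker tracks z
    if lower_x:
        x -= 2.0                      # a "drug" that lowers the biomarker
    y = z + random.gauss(0, 0.5)      # disease risk also tracks z, not x
    return x, y

untreated = [draw_patient() for _ in range(10000)]
treated = [draw_patient(lower_x=True) for _ in range(10000)]

def mean(vals):
    return sum(vals) / len(vals)

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    mx, my = mean(xs), mean(ys)
    cov = mean([(x - mx) * (y - my) for x, y in pairs])
    sx = mean([(x - mx) ** 2 for x in xs]) ** 0.5
    sy = mean([(y - my) ** 2 for y in ys]) ** 0.5
    return cov / (sx * sy)

# Observationally, x and y are strongly correlated...
print(round(corr(untreated), 2))
# ...but intervening on x leaves the disease risk unchanged:
print(round(mean([y for _, y in untreated]), 2))
print(round(mean([y for _, y in treated]), 2))
```

Observationally X looks like a great drug target; only the intervention arm
reveals that it isn't.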

~~~
alextp
> Why does this happen?

I'd guess it's because promoting this possible causal link as a cure for
diseases is pretty much the easiest way to get it tested -- where else will
you find 25k volunteers?

As always, when people are involved, the reason for failure seems to be in the
incentive structure.

------
refurb
Wow, that article was a little annoying.

Correct me if I'm wrong, but I get the impression that he thinks our current
approach to science is wrong due to our tendency to attribute cause and effect
to things where we have no conclusive proof of cause and effect?

Well, if he has a better suggestion on how to approach research, I'm all ears!

First off, I don't think ANYONE who was involved with the development of
torcetrapib thought it was a "slam dunk". The success rate of drugs that have
reached phase III is only slightly north of 50%.

Second, there is no way you could possibly figure out all the effects a
particular drug has on the human body. You'd be doing research for the next
100 years and you still wouldn't come close. So what we do is we come up with
a hypothesis (high HDL is good), we gather evidence in the most efficient
manner we can (other drugs that raise HDL help prolong life in humans and
animals), then we move forward with our BEST GUESS. That's how science works,
you create a hypothesis, then test it.

Are our hypotheses wrong sometimes? Of course. Do we learn something from the
failures? Yes. Trying and being successful 10% of the time is far better than
not trying at all and being successful 0% of the time.

Merck's CETP inhibitor is in phase III right now, and there is a chance that
it will fail too. And I don't think any scientist feels that high HDL is the
cause of reduced cardiac risks. A more accurate description would be "high
HDL is associated with reduced cardiac risks; this drug increases HDL levels,
so it stands a chance of reducing cardiac risks".

I think the author does a bad job of describing how scientists approach their
work. If anything, a scientist would be the first to call out a claim that
something _causes_ something else. That's how they're trained!

~~~
shubble
The article's criticism, I think, stems from the same feeling of discomfort I
got when I understood a bit better how drugs are developed.

As a culture I think we have a certain faith in the medical system to save us
from death, the same way previous generations looked to priests to save the
spirit.

The inference you mention, 'high HDL is associated with low cardiac arrest,
therefore a drug that increases HDL might help', is more vague than a layman
expects. It's like saying 'I want a safe car, and German cars tend to be safe,
therefore I will buy a German car' - it's valid in the absence of a real
understanding of how to specify and select a safe car, but it's more vague
than you'd be comfortable with. You expect an engineering company to be able
to specify a safe car based on deep knowledge. But because our understanding
of the disease, and of what different chemicals can do is incomplete, a drug
company can't do that. Instead, they follow as many hints as they can to
select a chemical that might work, and then advance it through a series of
progressively more expensive trials until they are pretty sure it does more
good than harm.

That's a valid way of doing things, and at the moment it's all we can do. But
it's not what a layman imagines, or certainly not what this one imagined. It
undermines our sense of control - our sense that we are immortal and can get
on with making an angry birds clone to get rich because there will be plenty
of time to do the projects we want to after the payoff - it's not like we are
going to die of heart disease, science has our back on that one!

Or maybe I'm generalizing my personal feelings too much?

~~~
mattgreenrocks
No, you're onto something here. Modern medicine is not all powerful. It does
some things very well, and we live much longer because of it, but there is
still so much we do not understand. I'm thankful that there are those that put
the time and money into this research that furthers our comprehension.

However, there are plenty of conditions you can develop where modern medicine
is only able to contain the symptoms, rather than fix the problem causing
them. Autoimmune conditions (such as MS) come to mind here.

------
kenjackson
I think scientists see this as a success. You get data and you revise your
hypothesis. You get more data and you revise it again.

A lot of people want science to be like politics. They want you to pick a side
and stick to it regardless of the data.

IMO, when conventional wisdom isn't at least occasionally overturned --
that's when I'll begin to think science is failing us.

~~~
SoftwareMaven
The problem is we are publishing the wrong answer too many times. The bias in
journals toward publishing positive results (not to mention the drive to
create company profits!) means people aren't rewarded for finding out they
are wrong.

Fix the incentives, and I think we'll have better science.

~~~
Joakal
The incentive is that research companies offer money to successful
researchers. Since failed attempts make a researcher appear less successful
to businesses, they hide them.

Not sure how you would 'fix' those research companies.

~~~
Karellen
One possible way is that, if you are in the position to make decisions based
on, or make use of, the research results of others (e.g. you're the FDA, or
the journal Nature) you require that companies publish/register the
methodology of any research they intend to conduct before it begins. If
someone submits/relies on the results of a study which was not pre-registered,
the data is ignored.

And by pre-registering, you can follow up on research that has been silently
"forgotten about". If too much research is forgotten about, you stop trusting
the results you do hear about.
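As a minimal sketch of the rule (the registry, study IDs, and dates below are
all made up), the gatekeeper only needs a registration date to compare against
the study's start date:

```python
from datetime import date

# Hypothetical registry: study id -> date its methodology was filed.
registry = {
    "trial-001": date(2010, 1, 15),
}

def accept_result(study_id, study_started):
    """Accept a submitted result only if its methodology was registered
    on or before the day the study began; otherwise ignore the data."""
    registered = registry.get(study_id)
    return registered is not None and registered <= study_started

print(accept_result("trial-001", date(2010, 3, 1)))   # pre-registered: True
print(accept_result("trial-002", date(2010, 3, 1)))   # never registered: False
```

Registration after the study has started is treated the same as no
registration at all, which is what blocks the "register once you already know
it worked" loophole.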

~~~
Joakal
It sounds like a good idea!

But: what if they instead do the research secretly? Then, on success, they
publicly notify the journal of what they intend to do and say that it will
take 5 months to research. Then 6 months later (an extra month to look
plausible), they publish their successful data.

------
winestock
The advice of pnathan, elsewhere in this thread, is good. This _is_ a better
article than what I've come to expect from Wired.

The main point of the article is that scientists have exhausted the low-
hanging fruit of useful correlations and are now grasping at the more dubious
correlations. The author claims that things are complicated by the concept of
causation.

He cites David Hume: "...causes are a strange kind of knowledge. This was
first pointed out by David Hume, the 18th-century Scottish philosopher. Hume
realized that, although people talk about causes as if they are real
facts—tangible things that can be discovered—they're actually not at all
factual. Instead, Hume said, every cause is just a slippery story, a catchy
conjecture, a 'lively conception produced by habit.' When an apple falls from
a tree, the cause is obvious: gravity. Hume's skeptical insight was that we
don't see gravity—we see only an object tugged toward the earth. We look at
X and then at Y, and invent a story about what happened in between. We can
measure facts, but a cause is not a fact—it's a fiction that helps us make
sense of facts."

It's been a while since I've taken philosophy, but Hume's skepticism of
causality is itself a story by its own criteria.

~~~
saulrh
A slight amendment to your statements: in _some fields of research_, we've
exhausted the low-hanging fruit, and "low-hanging" is defined differently for
every field. In mathematics, for example, we ran out of obvious things in the
1700s, but there are still "low-hanging fruit" because the cost of investment
is so low. My field, robotics, is currently new enough that we're still
finding things that are obvious in hindsight. In pharmaceuticals, things are
pretty easy to come up with, but really ridiculously expensive to test and
verify the safety of. Making sweeping statements about "low-hanging fruit"
doesn't work.

~~~
giardini
"Making sweeping statements about 'low-hanging fruit' doesn't work."

Sure it does! Extend the analogy a bit by letting the tree be a living thing.
That today you picked some fruit doesn't mean that tomorrow a new pear (apple,
guava, grapefruit, etc.) won't grow where you harvested.

And when the tree dies, there's always that "making lemonade from lemons"
metaphor... (OK, so that's engineering, not science, but it will keep you busy
and productive).

------
zasz
"Another meta review, meanwhile, looked at the 49 most-cited clinical research
studies published between 1990 and 2003. Most of these were the culmination of
years of careful work. Nevertheless, more than 40 percent of them were later
shown to be either totally wrong or significantly incorrect."

If science were really failing us, I don't see how we would have managed to
retract those incorrect studies. It feels like the writer had no bigger point
than "biomedical research is hard, let's go shopping." It's sensationalist to
consider science a failure every time it makes a mistake.

I thought the Hume references were pretty bad, too. If you read what he says,
he questions the existence of relations such as "A causes B" and prefers to
phrase them as, "In the past, we have observed A-like events are always
correlated with B-like events." For practical purposes, that's enough to
behave as if causality "really" exists. You just have to avoid mixing up
causality with mere correlation, which every good scientist already knows.

------
blix
This article misses the mark completely, both by extrapolating medicine to
science as a whole, and by attacking the idea of causation rather than the
sketchy practices of medicine.

All of the examples are pulled from medicine, which is notorious for its lack
of experimental rigor. To say that "Science has failed us" implies either
that medicine is the only important science or that all science is equally
sloppy, which is pretty insulting to scientists in any harder field.

His focus on causation is even more misguided. The very purpose of science is
to understand the way the world works; to understand what causes what. To
attack the idea of causation is to attack the very idea of science, and in
turn all of the advances it's brought about over the past 300 years. Beyond
that, we implicitly accept causation in almost every aspect of our lives
(pressing the space bar causes a space to appear, etc.). Certainly causes
can't be 'seen' like facts, but to suggest that this trivializes them, or
somehow makes them less useful, is nonsense (and, for what it's worth, is a
total misreading of Hume).

Complexity isn't a valid reason either. Some very well understood systems are
incredibly complex (look at the computer you are using now). What is true is
that, like all other humans, scientists make mistakes. We often make
incorrect causal links or are influenced by our biases. This is why
experiments exist (instead of pure data collection): to make sure the causes
we have assumed are correct. To point to a couple of experiments with an
unexpected result and then say that all of science has failed isn't even a
little bit right.

~~~
bmahmood
Agreed that the author's extrapolations from the problems of medicine to
science as a whole are cringe-worthy. The focus on medicine, though, makes me
think the headline was just a bad editorial decision.

That said, I think he is correct in his critique of medicine/pharma. The cost
of drug development has risen astronomically these past decades, with
billion-dollar pipelines to account for the cost of failure.

The pharma drug development model has not really evolved beyond a lottery
system of testing random compounds to treat diseases, and going back/forth
until the right permutation of a compound is found. This might have worked
before for initial "easy" diseases (that had easy drug targets, or single gene
mutations), but the problems we face now (Alzheimers, Cancer) are too complex
for our lottery-based drug development system.

~~~
refurb
I would disagree that the pharma drug development model hasn't evolved. There
are many recent advances that have helped improve drug development (human
cells used in pre-clinical screening, more advanced clinical trial design,
etc).

What I think has really changed is the cost of failure. The best example I can
think of is the discovery of benzodiazepines (the drug class that includes
Valium). The first benzodiazepine (chlordiazepoxide, Librium) was discovered
in 1957 (we're talking, the FIRST set of pre-clinical tests) and it was on the
market in 1960. 3 years from the first tests to market.

Nowadays, you'd be lucky to get to market in 15 years. A great example is
Qutenza. The product is nothing more than a patch that contains a very high
level of capsaicin (the stuff that makes peppers hot). When you apply it to
the skin, it can reduce the pain that sticks around after an attack of
shingles. I can't think of a product with fewer safety issues, yet it took 10
YEARS for the company to get FDA approval.

This is due to a combination of increased FDA scrutiny around safety along
with a high standard for efficacy (i.e. we don't care if your drug reduces
cholesterol, we want you to prove it reduces heart attacks). So in the past,
when a smaller, shorter trial was sufficient for FDA approval, you could take
a promising drug all the way to the FDA without a lot of expense. Not so
anymore.

------
thesash
I find this article troubling for two reasons: it fails to back up some of
its boldest claims with evidence, and it suffers from the same problems
presented by its own argument.

Claims like this one:

> "First, all of the easy causes have been found, which means that scientists
> are now forced to search for ever-subtler correlations, mining that mountain
> of facts for the tiniest of associations."

May be true, but the author presents no evidence (relevant studies, articles,
etc.) to support them.

The second, much more troubling problem, however, is that the argument
suffers from the very problem it presents! The author's conclusion that the
returns on scientific research are diminishing due to an inherent flaw in
conclusions drawn from correlations is _itself_ a correlation. He's
correlating the increasing cost of research with the increasing difficulty of
finding new correlations in the data.

There are other, simpler, less circular and philosophical explanations for why
the returns on pharmaceutical research have decreased, such as increasingly
strict regulations and fear of risk on the part of regulatory organizations.
See this TED talk, where Juan Enriquez talks about these issues:
[http://www.tedmed.com/videos-info?name=Juan_Enriquez_at_TEDM...](http://www.tedmed.com/videos-info?name=Juan_Enriquez_at_TEDMED_2011&q=updated&year=2011)

------
ggwicz
Science isn't failing us. Big bureaucracies getting in the way of science are
failing us. Even just to experiment with semi-controlled drugs, for example,
is a massive headache and ungodly expensive. Everybody wants to get into
journals and get paid by government institutes, so a lot of the science being
done is very safe and doesn't venture into the controversial much. Studies
are being funded by corporations looking to get some cooked data to support
their bullshit.

Science has never failed us, as science is inherently just human curiosity.
The continuing structural growth and big bureaucratic developments that many
governments, schools, and businesses are implementing are failing us.

 _"Bureaucracy is the art of making the possible impossible."_ \- Javier
Pascual

Science can't fail. It only illuminates. But a lot of shady assholes run this
place, and the last thing they want is a light shone on them.

------
dean
This is a very wrong-headed article. The author seems to think that science
should be able to get the right answers on the first try, and that if we
can't, it's somehow a failure of science and any attempts at understanding
should be abandoned.

I think he has a basic misunderstanding of science. He doesn't realize that
"made up stories" to explain how things work are just a starting point to
understanding. They have to be tested and revised and re-tested until we come
up with an explanation that reliably predicts how something works. And
failures are an integral part of the process. Failures advance understanding.

It reminds me of the quote by Thomas Edison, after a thousand failed
experiments, "We now know a thousand ways not to build a light bulb".

------
polychrome
This article does a great job of pointing out how science has limited its
thinking. It's not that science is wrong or is going to perish; it simply
needs to broaden its perspective.

Take, for example, the first time you came up with a cool new product. You
took it to a VC or someone who'd done it before, and they asked you about
your market, price, revenue, etc. Science is still creating cool new products
without paying attention to everything else around it.

Here's another good example: wind farms. We've been creating massive new
windmills that are bigger, more efficient, etc. Have we ever looked at how to
install them in such a fashion that they become more efficient as a team
rather than as individuals? And have we looked at how wind patterns change
because of them?

------
adharmad
The title itself is really bad - trial and error is the only way science
works. You observe a cause and an effect, propose a theory, and tweak it
based on more causes/effects. There are very few instances in the history of
science where someone without any contact with actual experiment sat in a
closed room and came up with a theory that was eventually proved correct. If
the author of the article had been alive in the 1920s-1950s and observed the
chaotic development of quantum mechanics, he would have the exact same
opinion of it that he has of the current state of medical research.

I am curious to see if the author has any actual suggestions on how to do
science.

------
pnathan
That's one of the best Wired articles I've read in a long time. I recommend
reading it.

------
marshray
I was helping do data analysis at a spine surgery clinic in the 90s. I
remember when that healthy-person MRI disc study came out. It was interesting,
but I don't think it slowed us down one bit. :-)

------
6ren
The author seems surprised that we don't understand everything. Feynman:
Nature's imagination is greater than your imagination.

A more interesting limit is relationships that cannot be understood in
isolation. When these exceed our working memory, we can't perform our usual
trick of hierarchical abstraction to look at one part or one aspect at a time.
Perhaps _that_ could be our limit of intuitive understanding, unless we come
up with a fundamentally new way of understanding complexity.

------
thisisnotmyname
The standard test for causality (at least in biology, where I work) is to
test for rescue. You first establish that under condition a, event b happens.
You then reverse a and observe b returning to normal. This, followed by
controls demonstrating that you're only changing a and are actually measuring
b, serves as a stringent test for causation.
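As a minimal sketch of that logic (the readout and effect sizes below are
hypothetical, not any particular assay):

```python
import random

random.seed(1)

def measure_b(condition_a, n=1000):
    """Hypothetical noisy readout: condition a shifts b upward by 2.0."""
    effect = 2.0 if condition_a else 0.0
    samples = [1.0 + effect + random.gauss(0, 0.3) for _ in range(n)]
    return sum(samples) / n

baseline = measure_b(condition_a=False)

# Step 1: establish that under condition a, event b happens.
b_under_a = measure_b(condition_a=True)
assert b_under_a > baseline + 1.0

# Step 2: reverse a and observe b returning to normal (the rescue).
b_rescued = measure_b(condition_a=False)
assert abs(b_rescued - baseline) < 0.1
```

The controls mentioned above correspond to verifying that `measure_b` really
tracks b and that nothing besides a changed between the two arms.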

------
jbjohns
pg once wrote in an essay that if you manage to stumble onto something taboo,
you've probably found something interesting (heavily paraphrased). A lot of
people are seriously offended by this article... just sayin'.

------
guscost
Odds of this article containing the answer? Not good.

------
saturn
> At any given time, about 10 percent of Americans are completely
> incapacitated by their lumbar regions

How can this possibly be true? _Completely_ incapacitated, i.e. bedridden and
immobile? Surely the country would be in near collapse if 30+ million of its
inhabitants were randomly bedridden at any one time just by that one medical
issue.

~~~
giardini
Thank goodness for aspirin, ibuprofen, and acetaminophen!

It is an overstatement. But if you merely ask around, especially among men,
you will likely find that a majority have had serious back pain and have some
trepidation that it might, at any time, for little or no reason, return.

I've got to try to get my socks on, now.

------
Trey-Jackson
TLDR

Way too long, meandering, full of anecdotes.

How is it at all surprising that trying to "fix" problems with the body is
uber-complicated?

Plus, there's the obvious missing bigger point - all these companies are
trying to find a solution that is a pill, as opposed to changing the
underlying problem: bad food, bad environment, bad physical conditioning, etc.
Billions spent on finding pills, very little money in solving the root
causes...

