
Why aren't we doing the math? - ColinWright
http://timharford.com/2012/10/why-arent-we-doing-the-maths/
======
lotharbot
Cached text-only version (apparently his server can't handle the traffic):
[http://webcache.googleusercontent.com/search?q=cache:timharf...](http://webcache.googleusercontent.com/search?q=cache:timharford.com/2012/10/why-
arent-we-doing-the-maths/&hl=en&prmd=imvns&strip=1)

Key point: _"It seems that doctors may need a good deal of help interpreting
the evidence they are likely to be exposed to on clinical effectiveness, while
epidemiologists and statisticians need to think hard about how they present
their discoveries."_ This is based on the observation that improving "five-
year survival rates" may not actually mean the screening is helpful, it may
just mean that you're learning about an untreatable disease 6 years before it
kills you instead of 4 years before it kills you, but doctors don't seem to
understand this.
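A quick numeric sketch of the effect, with invented ages, in Python:

```python
# Lead-time bias in miniature: earlier diagnosis inflates the
# five-year survival rate without delaying death at all.
death_age = 70          # patient dies at 70 regardless of screening
diagnosis_without = 66  # symptoms appear at 66 -> survives 4 years
diagnosis_with = 64     # screening finds it at 64 -> survives 6 years

def five_year_survival(diagnosis_age, death_age):
    """True if the patient is still alive five years after diagnosis."""
    return death_age - diagnosis_age >= 5

print(five_year_survival(diagnosis_without, death_age))  # False
print(five_year_survival(diagnosis_with, death_age))     # True
# "Survival" improves only because the clock started earlier;
# the patient dies at 70 either way.
```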

~~~
tel
(Also at:
[http://www.ft.com/cms/s/2/118169b6-1d74-11e2-869b-00144feabd...](http://www.ft.com/cms/s/2/118169b6-1d74-11e2-869b-00144feabdc0.html))

I have a thesis that the kind of thinking required to survive med school is
diametrically opposed to the kind of thinking required to do statistics well.
It's the "rote pattern matching" versus "mathetic language fluency" issue
that's at the heart of things like Papert's Constructivist learning theory[1]
and it really causes me to have little surprise at an article like this.
Doctors are (usually) viciously smart people who have to make a wide array of
difficult decisions daily, but to operate at that level requires an intuition
around a lot of cached knowledge, something I feel to be basically the
opposite of statistical thought.

I don't think this is unique, either. It's the heart of Fisher's program to
provide statistical tests as tools to decision-makers[2]. It's an undoubted
success in providing general defense against coincidences to a wide audience,
but it casts the deductive process needed in a pale light.

I think a principal component of the computer revolution is to provide more
people with better insight into mathetic thought. Papert focuses on
combinatorial examples in children in Mindstorms[3], but I think the next
level is understanding information theory, distributions, and correlation on
an intuitive level. MCMC sampling went a long way toward helping me
understand these ideas, and probabilistic programming languages are a great
step toward making them available to the general public, but we also need
great visualization (something far removed from today's often lazy "data
viz").

Ideally, means and variances will become concepts stronger than just
parameters of the normal distribution---which, I feel, is about as far as a
good student in a typical college statistics class for a science or
engineering major can go---and instead be tightly connected to using
distributions accurately when thinking about complex systems of many
interacting parts, with concentration inequalities to guide intuition.

I think the biggest driver of the recent popularization of Bayesian
statistics is that thinking in distributions comes quite naturally to the
human brain, but remains rather unrefined. People can roughly understand
uncertainty about an outcome, but have a harder time with conjunctions or
risk. How can we build tools that teach people greater refinement of these
intuitions?
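One of the simplest refinements, that a conjunction can never be more probable than either of its parts, can even be demonstrated by simulation (a toy sketch with made-up probabilities):

```python
import random

# Conjunction rule by simulation: the count of trials where both
# events occur can never exceed the count where one of them occurs.
random.seed(1)
trials = 100_000
a = both = 0
for _ in range(trials):
    rain = random.random() < 0.3   # event A
    wind = random.random() < 0.5   # event B, independent here
    a += rain
    both += rain and wind
print(both <= a)  # True: the conjunction is never more likely
```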

[1]
[http://en.wikipedia.org/wiki/Constructivism_(learning_theory...](http://en.wikipedia.org/wiki/Constructivism_\(learning_theory\))
[2]
[http://en.wikipedia.org/wiki/Statistical_Methods_for_Researc...](http://en.wikipedia.org/wiki/Statistical_Methods_for_Research_Workers)
[3] <http://www.amazon.com/dp/0465046746>

~~~
Evbn
Impossible. The whole lesson of statistics is that computing probabilities is
an intricate process. It will never be intuitive. I can learn to throw a ball
at a target on intuition, but I will never learn to launch a rocket at Mars
on intuition.

At best, it can become intuitive to ask the right skeptical questions when
being shown a claim.

~~~
tel
That's an interesting viewpoint that I'd love to discuss more. I disagree,
obviously, but I'd like to know why you feel so strongly that statistical
thought can never be intuitive.

I feel like it's closely related to combinatorial thought. To again steal an
example from Papert, he often talks about asking children to count the number
of possible pairs of colors among a handful of marbles. With some formal
training it's easy to pare the problem down to the right information and to
visualize the counting process. Given a variety of colored marbles, I imagine
you could easily estimate how many pairs are possible. Children cannot, and
must learn to think that way at a certain point.
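Papert's marble exercise is a one-liner once the formal idea clicks; a sketch in Python:

```python
from itertools import combinations

# Count the unordered pairs of colors among five marbles.
colors = ["red", "green", "blue", "yellow", "white"]
pairs = list(combinations(colors, 2))
print(len(pairs))  # 10, i.e. C(5, 2) = 5 * 4 / 2
```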

In the same way, conceptualizing uncertain events within the larger space of
things that could happen, and becoming familiar with the extents and
limitations of the causal models we all use, is a way of thinking that takes
a great deal of effort (today) to acquire, but feels intuitive once you have
it. I believe there's nothing inherently impossible about teaching it if the
appropriate tools are available.

------
lifeisstillgood
If you read Ben Goldacre and then Tim Harford, you take the view that the
doctors are acting rationally.

Pharma companies withhold negative trials, bribe doctors to use their latest
expensive treatments, and lie, cheat and steal.

So a new treatment you have never heard of and are asked to evaluate, in
between appointments, will get shoved into the mental bin of "all new
treatments look good till the real-world results start coming in"; let's keep
doing the things I know "only" kill 2/1000.

In fact, things are so bad that one MP in England asking, yes just asking,
the government what it intends to do to stop pharma companies lying is
front-page news. <http://www.drsarah.org.uk/sarahs-blog/>

------
wiggins37
I'm a resident in Internal Medicine, and it is actually not surprising that a
lot of physicians tested got this wrong. The good news is that this topic,
also known as "lead time bias", is now being taught pretty regularly (at
least in my experience). So when I read this I knew exactly what the catch
was going to be with screening "A". So there is hope.

------
kiba
Expecting a doctor to know statistics and evaluate information like this is
like expecting a master computer hacker to perform surgery on you.

~~~
bo1024
Except that doctors actually _do_ have to evaluate information like this all
the time, and how they evaluate it determines what type of treatment they
choose.

~~~
mertd
Same goes for diagnosis, which is basically finding the maximum likelihood
culprit, given the evidence (symptoms, patient history, age, gender).
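A toy sketch of that framing, with invented likelihoods:

```python
# Diagnosis as maximum likelihood: pick the condition under which the
# observed evidence is most probable. Numbers are purely illustrative.
likelihood = {
    # P(observed symptoms | condition)
    "flu": 0.60,
    "cold": 0.25,
    "allergy": 0.10,
}
diagnosis = max(likelihood, key=likelihood.get)
print(diagnosis)  # flu
```

(A real clinician effectively weights these by priors too, i.e. how common each condition is, which makes it Bayesian rather than pure maximum likelihood.)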

------
kespindler
This is one of the more asinine articles I've read lately. It draws a false
distinction between "saving lives" and "N-year survival rates", and then
lambasts doctors for not understanding this fanciful distinction.

It's a false distinction because we all eventually die. If we look at 100-year
survival rates for test B, everyone's dead, so the survival rate is 0. So
there is no such thing as "saving lives". There _only are_ N-year survival
rates.

Generally speaking, it takes a lot more than the random musings of a random
journalist to upend an incredibly widespread metric of the effectiveness of
diagnostics and treatments...

~~~
matthewowen
Tim Harford is hardly a 'random journalist'; he's a well respected economic
journalist, and a fellow at Nuffield College, Oxford (which is notable within
the university for its focus on social sciences).

Besides which, it doesn't draw a false distinction between the two, at all -
you've misread the article if you think that it does. The point it makes is
that if a new screen improves five year survival rates it is very possible
that it _has not actually extended anyone's life_ , and that this is
problematic because many doctors are unable to realise this (or to appreciate
how significant the claims of other treatments are).

In fact, your claim that there's no such thing as "saving lives" is a true
case of asininity. Of course we die eventually - but some treatments can
_extend_ life, and that's why we use them.

Moreover, this is actually a very important topic in the UK at the moment
(note that this article was published in the Financial Times), because current
NHS reforms are putting much more commissioning power in the hands of
consortiums of GPs ('family doctors' in the US parlance) - so the ability of
doctors to evaluate the claims of efficacy made for different treatments and
screens is very important (and by the evidence of the research he cites,
somewhat lacking). It's hardly lambasting - it is now important that GPs can
understand these subtleties, whether they're happy about being in this
situation or not.

