Why aren't we doing the math? (timharford.com)
56 points by ColinWright on Oct 28, 2012 | hide | past | favorite | 22 comments



Cached text-only version (apparently his server can't handle the traffic): http://webcache.googleusercontent.com/search?q=cache:timharf...

Key point: "It seems that doctors may need a good deal of help interpreting the evidence they are likely to be exposed to on clinical effectiveness, while epidemiologists and statisticians need to think hard about how they present their discoveries." This is based on the observation that improving "five-year survival rates" may not actually mean the screening is helpful; it may just mean that you're learning about an untreatable disease 6 years before it kills you instead of 4, but doctors don't seem to understand this.
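A toy simulation makes the lead-time effect concrete (all numbers invented for illustration): suppose a disease always kills exactly 10 years after onset, no matter what. A screen that merely moves diagnosis from 4 years before death to 6 years before death pushes the "five-year survival from diagnosis" rate from 0% to 100% while extending no one's life.

```python
# Toy model of lead-time bias: the disease kills exactly 10 years after
# onset regardless of treatment, so earlier detection cannot extend life.
# (Hypothetical numbers, purely to illustrate the statistical artifact.)

def five_year_survival(years_before_death_at_diagnosis):
    """Fraction of patients alive 5 years after diagnosis.

    Since every patient dies exactly `years_before_death_at_diagnosis`
    years after diagnosis, survival is all-or-nothing.
    """
    return 1.0 if years_before_death_at_diagnosis > 5 else 0.0

late_detection = five_year_survival(4)   # diagnosed 4 years before death
early_detection = five_year_survival(6)  # diagnosed 6 years before death

print(late_detection)   # 0.0 -- "0% five-year survival"
print(early_detection)  # 1.0 -- "100% five-year survival", same deaths
```

The headline metric doubles while the date of every death stays fixed, which is exactly the trap the surveyed doctors fell into.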


(Also at: http://www.ft.com/cms/s/2/118169b6-1d74-11e2-869b-00144feabd...)

I have a thesis that the kind of thinking required to survive med school is diametrically opposed to the kind of thinking required to do statistics well. It's the "rote pattern matching" versus "mathetic language fluency" issue that's at the heart of things like Papert's Constructivist learning theory[1] and it really causes me to have little surprise at an article like this. Doctors are (usually) viciously smart people who have to make a wide array of difficult decisions daily, but to operate at that level requires an intuition around a lot of cached knowledge, something I feel to be basically the opposite of statistical thought.

I don't think this is unique, either. It's the heart of Fisher's program to provide statistical tests as tools to decision-makers[2]. It's an undoubted success in providing general defense against coincidences to a wide audience, but it casts the deductive process needed in a pale light.

I think a principal component of the computer revolution is to provide more people with better insight into mathetic thought. Papert focuses on combinatorial examples in children in Mindstorms[3], but I think the next level is understanding information theory, distributions, and correlation on an intuitive level. MCMC sampling went an incredible way toward helping me understand these ideas, and probabilistic programming languages are a great step toward making them more available to the general public, but we also need great visualization (something far removed from today's often lazy "data viz").
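To illustrate the MCMC point, here is a minimal random-walk Metropolis sampler (a sketch, with a standard normal chosen as the target purely for illustration): you can recover a distribution's shape by simulation, with no algebra at all.

```python
import math
import random

def metropolis(target_log_pdf, n_samples=10000, step=1.0, x0=0.0, seed=42):
    """Random-walk Metropolis: propose a nearby point, accept the move
    with probability min(1, p(proposal)/p(current))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0, step)
        # Comparing logs avoids underflow for small densities.
        if math.log(rng.random()) < target_log_pdf(proposal) - target_log_pdf(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, log-density up to an additive constant.
samples = metropolis(lambda x: -0.5 * x * x)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# mean is near 0 and var is near 1, as the target distribution dictates.
```

Watching the chain wander and then histogramming the samples is exactly the sort of hands-on experience that made distributions click for me.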

Ideally, things like means and variances will be concepts that are stronger than just parameters of the normal distribution---which I feel is about as far as a good student in a typical college curriculum statistics class in a science or engineering major can go---but instead be tightly connected to using distributions accurately when thinking of complex systems of many interacting parts and using concentration inequalities to guide intuition.

I think the biggest driver of the recent popularization of Bayesian statistics is that distributions as a mode of thought is something quite natural to the human brain, but also something rather unrefined. People can roughly understand uncertainty about an outcome, but have a harder time with conjunctions or risk. How can we build tools that will teach people greater refinement of these intuitions?
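The conjunction difficulty is easy to state precisely (the probabilities below are invented, in the style of Tversky and Kahneman's "Linda problem"): a conjunction can never be more probable than either conjunct, however representative it feels.

```python
# Conjunction rule: P(A and B) <= P(A), no matter how plausible the
# conjunction sounds. All numbers are hypothetical.
p_bank_teller = 0.05                 # P(A)
p_feminist_given_teller = 0.30       # P(B | A)
p_both = p_bank_teller * p_feminist_given_teller  # P(A and B)

assert p_both <= p_bank_teller  # always holds, yet intuition often objects
print(p_both)  # 0.015
```

Tools that let people play with exactly this kind of arithmetic, rather than memorize it, are what I mean by refining the intuition.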

[1] http://en.wikipedia.org/wiki/Constructivism_(learning_theory... [2] http://en.wikipedia.org/wiki/Statistical_Methods_for_Researc... [3] http://www.amazon.com/dp/0465046746


Med/Law = pattern recognition machines to detect statistical regularities. You show them a plane. Then give them another object and ask them whether or not it is a plane.

Math/Eng/Science = use of pattern recognition over a multitude of composable machines to create something new. You show them a combustion engine, steel frames, gears and vulcanised rubber wheels; they connect these to the invention of bikes/trains to make a car.


Or "pattern recognition" versus "model building".


Medicine took a huge leap forward around 1800 or so when people started collecting statistics on what worked and what didn't. Evidently the doctors' intuition was very, very wrong.


Impossible. The whole lesson of statistics is that computing probabilities is an intricate process. It will never be intuitive. I can learn to throw a ball at a target on intuition, but I will never learn to launch a rocket at Mars on intuition.

At best, it can become intuitive to ask the right skeptical questions when being shown a claim.


That's an interesting viewpoint that I'd love to discuss more. I disagree, obviously, but I want to know: why do you feel so strongly that statistical thought can never be intuitive?

I feel like it's closely related to combinatorial thought. To again steal an example from Papert, he often talks about asking children to count the number of possible pairs of colors of marbles given to them. With some formal training it's easy to pare the problem down to the right information, and easy to visualize the process. Given a variety of colored marbles, I imagine you could easily estimate the number of possible color pairs. Children cannot, and must learn to think that way at a certain point.
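Papert's marble exercise collapses to a one-liner once the combinatorial idea clicks (a sketch with four hypothetical colors):

```python
from itertools import combinations

colors = ["red", "blue", "green", "yellow"]

# Unordered pairs of distinct colors: C(4, 2) = 6.
pairs = list(combinations(colors, 2))
print(len(pairs))  # 6

# The closed form n*(n-1)/2 matches the enumeration.
n = len(colors)
assert len(pairs) == n * (n - 1) // 2
```

The child's task of systematically enumerating without double-counting is exactly what `combinations` mechanizes; seeing the enumeration and the formula agree is the "mathetic" moment.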

In the same way, conceptualizing uncertain events within the larger space of things that could happen, and becoming familiar with the extents and limitations of the causal models we all use, is a way of thinking that takes a great deal of effort (today) to acquire, but feels intuitive once you have it. I believe there's nothing inherently impossible about teaching it if the appropriate tools are available.


"Impossible" seems a broad claim - I don't see why it shouldn't be possible to put the information in a form, possibly decorated with details from a rigorous analysis, that makes pattern matching work. If the pattern matching is otherwise proving effective (itself an empirical claim, to be sure), we should be careful about teaching doctors not to pattern match.


If you read Ben Goldacre and then Tim Harford, you take the view that the doctors are acting rationally.

Pharma companies withhold negative trials, bribe doctors to use their latest expensive treatments, lie, cheat and steal.

So a new treatment you have never heard of, and are asked to evaluate in between appointments, will get shoved into the mental bin of "all new treatments look good till the real-world results start coming in; let's keep doing the things I know 'only' kill 2 in 1,000."

In fact things are so bad that one MP in England asking, yes just asking, the government what they intend to do to stop pharma companies lying, is front page news. http://www.drsarah.org.uk/sarahs-blog/


I'm a resident in Internal Medicine, and it is actually not surprising that a lot of physicians tested got this wrong. The good news is that this topic, known as "lead-time bias", is now being taught pretty regularly (in my experience, at least). So when I read this I knew exactly what the catch was going to be with screening "A". So there is hope.


Expecting a doctor to know statistics and evaluate information like this is like expecting a master computer hacker to perform surgery on you.


Except that doctors actually do have to evaluate information like this all the time, and how they evaluate it determines what type of treatment they choose.


Same goes for diagnosis, which is basically finding the maximum likelihood culprit, given the evidence (symptoms, patient history, age, gender).
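That framing can be sketched directly (a toy naive-Bayes diagnoser; the diseases, symptoms, and probabilities are all invented, not clinical data): score each candidate as prior times the likelihood of the observed evidence, and pick the maximum.

```python
# Toy diagnosis as maximum a posteriori inference: score each disease
# by prior * likelihood of the observed symptoms. All numbers invented.
priors = {"flu": 0.10, "cold": 0.25, "covid": 0.05}
symptom_likelihoods = {
    "flu":   {"fever": 0.9, "cough": 0.6},
    "cold":  {"fever": 0.1, "cough": 0.7},
    "covid": {"fever": 0.7, "cough": 0.8},
}

def diagnose(symptoms):
    """Return the disease maximizing P(disease) * P(symptoms | disease),
    with symptoms treated as conditionally independent (naive Bayes)."""
    def score(disease):
        s = priors[disease]
        for symptom in symptoms:
            s *= symptom_likelihoods[disease][symptom]
        return s
    return max(priors, key=score)

print(diagnose(["fever", "cough"]))  # "flu" under these made-up numbers
print(diagnose(["cough"]))           # "cold": the high prior dominates
```

Note how the answer flips with the evidence: fever plus cough favors flu, but cough alone lets the cold's higher base rate win, which is exactly the prior-versus-likelihood trade-off diagnosticians navigate informally.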


This is pretty basic, relevant math. Don't most degree programs in pre-med or biology require at least an introductory statistics class?


It's not that basic; it's introductory stochastics, which in my biology undergrad was taught in Maths II, and in my sister's medical undergrad was taught in the last year, I think. She still struggles with p-values (especially how to interpret them and adjust for cut-offs) in her PhD.


"doctor"

"master computer hacker"

Do even most computer programmers know statistics and how to meaningfully evaluate data?


Sadly, current NHS reforms mean that it is increasingly necessary that they can do so - broadly speaking, the healthcare system is being significantly decentralised, and much more commissioning power is being put in their hands.

And, of course, you already have this situation in the USA, no? How does your doctor decide which treatments they ought to prescribe for you? How does your doctor decide which screens are useful, which are not?


This is one of the more asinine articles I've read lately. It draws a false distinction between "saving lives" and "N-year survival rates", and then lambasts doctors for not understanding this fanciful distinction.

It's a false distinction because we all eventually die. If we look at 100-year survival rates for test B, everyone's dead, so the survival rate is 0. So there is no such thing as "saving lives". There _only are_ N-year survival rates.

Generally speaking, it takes a lot more than the random musings of a random journalist to upend an incredibly widespread metric of the effectiveness of diagnostics and treatments...


Tim Harford is hardly a 'random journalist'; he's a well respected economic journalist, and a fellow at Nuffield College, Oxford (which is notable within the university for its focus on social sciences).

Besides which, it doesn't draw a false distinction between the two, at all - you've misread the article if you think that it does. The point it makes is that if a new screen improves five year survival rates it is very possible that it has not actually extended anyone's life, and that this is problematic because many doctors are unable to realise this (or to appreciate how significant the claims of other treatments are).

In fact, your claim that there's no such thing as "saving lives" is a true case of asininity. Of course we die eventually - but some treatments can extend life, and that's why we use them.

Moreover, this is actually a very important topic in the UK at the moment (note that this article was published in the Financial Times), because current NHS reforms are putting much more commissioning power in the hands of consortiums of GPs ('family doctors', in US parlance). So the ability of doctors to evaluate the claims of efficacy made for different treatments and screens is very important (and, by the evidence of the research he cites, somewhat lacking). It's hardly lambasting: it is now important that GPs can understand these subtleties, whether they're happy about being in this situation or not.


> It's a false distinction because we all eventually die... There _only are_ N-year survival rates.

The metric he's discussing is death rate from the disease in question. So although we all eventually die, we don't all die from, for example, pancreatic cancer. Thus, "does it reduce the death rate by pancreatic cancer" is a notably better metric than "does it reduce the total incidence of death over the next five years."

As to whether a random journalist can come up with a better metric of success than the medical profession, look at your own response to the proposal. You misunderstood, you got angry, you questioned the credentials of the source. If the medical profession in general has that attitude to outside criticism, then yes, it seems likely that they would not be using the optimum statistical measures.


If you don't like this particular article, Peter Norvig goes over the same topic (problem?) much more rigorously: http://norvig.com/experiment-design.html


^ This.



