Key point: "It seems that doctors may need a good deal of help interpreting the evidence they are likely to be exposed to on clinical effectiveness, while epidemiologists and statisticians need to think hard about how they present their discoveries." This is based on the observation that improving "five-year survival rates" may not actually mean a screening is helpful; it may just mean you're learning about an untreatable disease six years before it kills you instead of four years before it kills you (so-called lead-time bias), but doctors don't seem to understand this.
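The arithmetic of that lead-time effect is easy to simulate. The sketch below uses made-up numbers: a hypothetical untreatable disease whose date of death no intervention changes, and a screen that merely moves diagnosis two years earlier. Five-year survival "improves" dramatically while not a single death is postponed:

```python
import random

random.seed(1)

# Hypothetical untreatable disease: each patient dies at a time no
# intervention can change. Screening only moves the diagnosis date.
n = 100_000
# Years between symptomatic diagnosis and death (assumed Uniform(2, 6)).
lead_without_screening = [random.uniform(2, 6) for _ in range(n)]
# Assume screening detects the disease exactly 2 years earlier.
lead_with_screening = [t + 2 for t in lead_without_screening]

def five_year_survival(leads):
    """Share of patients still alive 5 years after diagnosis."""
    return sum(t > 5 for t in leads) / len(leads)

print(five_year_survival(lead_without_screening))  # ~0.25
print(five_year_survival(lead_with_screening))     # ~0.75
```

Every patient dies on exactly the same day in both arms; only the clock-start moved. That's why five-year survival is such a treacherous headline metric for screening programs.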
I have a thesis that the kind of thinking required to survive med school is diametrically opposed to the kind of thinking required to do statistics well. It's the "rote pattern matching" versus "mathetic language fluency" issue at the heart of things like Papert's constructionist learning theory, and it leaves me unsurprised by an article like this. Doctors are (usually) viciously smart people who have to make a wide array of difficult decisions daily, but operating at that level requires intuition built on a lot of cached knowledge, something I feel is basically the opposite of statistical thought.
I don't think this is unique, either. It's at the heart of Fisher's program to provide statistical tests as tools for decision-makers. That program is an undoubted success in giving a wide audience a general defense against coincidence, but it casts the deductive process underneath in a pale light.
I think a principal component of the computer revolution is providing more people with better insight into mathetic thought. Papert focuses on combinatorial examples with children in Mindstorms, but I think the next level is understanding information theory, distributions, and correlation on an intuitive level. MCMC sampling went an incredibly long way toward helping me understand these ideas, and probabilistic programming languages are a great step toward making them more available to the general public, but we also need great visualization (something far removed from today's often lazy "data viz").
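Part of why MCMC demystifies distributions is that the core algorithm is tiny. Here is a minimal random-walk Metropolis sampler in plain Python, with a standard normal target chosen purely for illustration; it's a sketch of the idea, not a production sampler:

```python
import math
import random

random.seed(0)

def metropolis(log_density, start, step, n_samples, burn_in=1000):
    """Random-walk Metropolis: propose a Gaussian step, accept the move
    with probability min(1, p(new) / p(old))."""
    x = start
    log_p = log_density(x)
    samples = []
    for i in range(n_samples + burn_in):
        proposal = x + random.gauss(0, step)
        log_p_new = log_density(proposal)
        if math.log(random.random()) < log_p_new - log_p:
            x, log_p = proposal, log_p_new
        if i >= burn_in:
            samples.append(x)
    return samples

# Target: standard normal, log-density up to an additive constant.
draws = metropolis(lambda x: -0.5 * x * x, start=0.0, step=1.0,
                   n_samples=50_000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(round(mean, 2), round(var, 2))  # close to 0 and 1
```

Watching the chain wander and its histogram converge to the target is exactly the kind of intuition-building visualization the comment above is asking for.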
Ideally, concepts like means and variances will be more than just parameters of the normal distribution---which I feel is about as far as a good student in a typical college statistics class in a science or engineering major can go---and will instead be tightly connected to using distributions accurately when thinking about complex systems of many interacting parts, and to using concentration inequalities to guide intuition.
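As a concrete example of a concentration inequality guiding intuition: Chebyshev's bound says P(|X - mu| >= k*sigma) <= 1/k^2 for *any* distribution with finite variance. A quick simulation (exponential distribution chosen arbitrarily; mean and standard deviation are both 1 for it) shows both how universal and how loose the bound is:

```python
import random

random.seed(0)

# Chebyshev: P(|X - mu| >= k * sigma) <= 1 / k^2, for any distribution
# with finite variance -- a crude but fully general tail bound.
n, k = 200_000, 3
draws = [random.expovariate(1.0) for _ in range(n)]  # Exp(1): mu = sigma = 1
mu, sigma = 1.0, 1.0

tail = sum(abs(x - mu) >= k * sigma for x in draws) / n
print(tail, "<=", 1 / k**2)  # empirical tail is well under the 1/9 bound
```

The empirical tail (about e^-4 here) sits far below the 1/9 guarantee; that gap between the universal bound and the actual distribution is itself a useful piece of statistical intuition.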
I think the biggest driver of the recent popularization of Bayesian statistics is that distributions as a mode of thought is something quite natural to the human brain, but also something rather unrefined. People can roughly understand uncertainty about an outcome, but have a harder time with conjunctions or risk. How can we build tools that will teach people greater refinement of these intuitions?
Math/Eng/Science = use of pattern recognition over a multitude of composable machines to create something new. You show them a combustion engine, steel frames, gears and vulcanised rubber wheels, then they connect those to the invention of bikes/trains to make a car.
At best, it can become intuitive to ask the right skeptical questions when being shown a claim.
I feel like it's closely related to combinatorial thought. To again steal an example from Papert, he often talks about asking children to count the number of possible pairs of colors among marbles given to them. With some formal training it's easy to visualize the process and pare it down to the right information. Given a variety of colored marbles, I imagine you could easily estimate the number of possible color pairs. Children cannot, and must learn to think that way at a certain point.
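Once you have the formal idea, Papert's marble exercise is a two-line program. The snippet below (an arbitrary set of five colors, counting unordered pairs of distinct colors) enumerates the pairs and checks the count against the closed form C(n, 2):

```python
from itertools import combinations
from math import comb

colors = ["red", "blue", "green", "yellow", "white"]  # hypothetical marbles

# Enumerate every unordered pair of distinct colors -- the exercise
# Papert poses to children -- then check against the closed form C(n, 2).
pairs = list(combinations(colors, 2))
print(len(pairs))            # 10
print(comb(len(colors), 2))  # 10
```

The enumeration and the formula giving the same answer is the bridge between the child's counting-by-hand and the adult's combinatorial shortcut.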
In the same way, conceptualizing uncertain events within the larger space of things that could happen, and becoming familiar with the extents and limitations of the causal models we all use, is a way of thinking that takes a great deal of effort (today) to acquire, but feels intuitive once you have it. I believe there's nothing inherently impossible about teaching it if the appropriate tools are available.
Pharma companies withhold negative trials, bribe doctors to use their latest expensive treatments, and lie, cheat and steal.
So a new treatment you have never heard of and are asked to evaluate, in between appointments, will get shoved into the mental bin of "all new treatments look good till the real-world results start coming in"; let's keep doing the things I know "only" kill 2 in 1,000.
In fact, things are so bad that one MP in England asking, yes, just asking, the government what it intends to do to stop pharma companies lying is front-page news.
"master computer hacker"
Do even most computer programmers know statistics and how to meaningfully evaluate data?
And, of course, you already have this situation in the USA, no? How does your doctor decide which treatments they ought to prescribe for you? How does your doctor decide which screens are useful, which are not?
It's a false distinction because we all eventually die. If we look at 100-year survival rates for test B, everyone's dead, so the survival rate is 0. So there is no such thing as "saving lives". There _only are_ N-year survival rates.
Generally speaking, it takes a lot more than the random musings of a random journalist to upend an incredibly widespread metric of the effectiveness of diagnostics and treatments...
Besides which, it doesn't draw a false distinction between the two at all - you've misread the article if you think it does. The point it makes is that if a new screen improves five-year survival rates, it is very possible that it has not actually extended anyone's life, and that this is problematic because many doctors are unable to realise this (or to appreciate how significant the claims of other treatments are).
In fact, your claim that there's no such thing as "saving lives" is a true case of asininity. Of course we die eventually - but some treatments can extend life, and that's why we use them.
Moreover, this is actually a very important topic in the UK at the moment (note that this article was published in the Financial Times), because current NHS reforms are putting much more commissioning power in the hands of consortiums of GPs ('family doctors' in the US parlance) - so the ability of doctors to evaluate the claims of efficacy made for different treatments and screens is very important (and by the evidence of the research he cites, somewhat lacking). It's hardly lambasting - it is now important that GPs can understand these subtleties, whether they're happy about being in this situation or not.
The metric he's discussing is death rate from the disease in question. So although we all eventually die, we don't all die from, for example, pancreatic cancer. Thus, "does it reduce the death rate by pancreatic cancer" is a notably better metric than "does it reduce the total incidence of death over the next five years."
As to whether a random journalist can come up with a better metric of success than the medical profession, look at your own response to the proposal. You misunderstood, you got angry, you questioned the credentials of the source. If the medical profession in general has that attitude to outside criticism, then yes, it seems likely that they would not be using the optimum statistical measures.