Study of 160k men finds no all-cause mortality benefit from prostate screening (nejm.org)
28 points by carbocation on March 19, 2012 | 30 comments



To clarify the Hacker News title, the article is talking specifically about prostate-specific antigen (PSA) screening, which is already known to be of uncertain medical value except in some very specific situations. As the discoverer of PSA said, "I never dreamed that my discovery four decades ago would lead to such a profit-driven public health disaster. The medical community must confront reality and stop the inappropriate use of P.S.A. screening. Doing so would save billions of dollars and rescue millions of men from unnecessary, debilitating treatments." http://www.nytimes.com/2010/03/10/opinion/10Ablin.html

There are other more promising methods of prostate cancer screening in development that will hopefully be able to do better in the future.


From the link you posted:

"last month, the American Cancer Society urged more caution in using the test"

It's a very sad state of affairs when a test that unambiguously offers some statistical utility (speaking as a Bayesian, any test is useful if we do the right things with the results; this is quite literally a provable theorem of decision theory) is potentially problematic because we aren't properly evaluating what to do when we get the results. I can't blame patients here; they're not the experts. It's doctors (and perhaps more fairly, the entire medical research industry) that need to wise the fuck up and realize that they're acting as statistical dilettantes in a field where they're putting patients' lives at risk because they don't understand math very well.

Yes, even a test that only detects ~3% of cancers should be useful; but this requires that doctors completely understand what it means both to see a positive and a negative result, and don't overreact when they see either one.

If this was a matter of just weighing the costs of having tests done versus the benefits of getting the results, it would be one thing. But it's not. People actually end up suffering and spending great amounts of money just because they did "the right thing" and had tests run, and then listened to the doctors' advice afterwards.

Once more with feeling: after discounting for the cost of the test, there is no test that should be of negative value for a rational agent to take. The results should always be used in a way that, in general, increases the welfare of the people taking the test.

If this is not the case in medicine, then they're doing something seriously wrong mathematically speaking, and this is a very bad thing that should be a high priority to fix.
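
To make that concrete, here's a minimal sketch of the decision-theory claim, with entirely made-up prevalence, test characteristics, and utilities (none of these numbers come from the study): an agent that Bayes-updates on the result and then picks the best action can never do worse in expectation than one that ignores the test.

    # Toy expected-utility model of "a test never hurts a rational agent".
    # All numbers are invented for illustration.

    prevalence  = 0.02    # P(disease)
    sensitivity = 0.90    # P(test+ | disease)
    specificity = 0.95    # P(test- | no disease)

    # Utility of (action, state): treating the healthy is costly,
    # leaving the sick untreated is much worse.
    U = {("treat", "sick"): -10, ("treat", "healthy"): -15,
         ("wait",  "sick"): -100, ("wait",  "healthy"): 0}

    def eu(action, p_sick):
        return p_sick * U[(action, "sick")] + (1 - p_sick) * U[(action, "healthy")]

    def best(p_sick):
        return max(eu(a, p_sick) for a in ("treat", "wait"))

    # Without the test: pick the single best action under the prior.
    eu_no_test = best(prevalence)

    # With the test: Bayes-update on each possible result, then act optimally.
    p_pos = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    p_sick_pos = prevalence * sensitivity / p_pos
    p_sick_neg = prevalence * (1 - sensitivity) / (1 - p_pos)
    eu_with_test = p_pos * best(p_sick_pos) + (1 - p_pos) * best(p_sick_neg)

    print(f"EU without test: {eu_no_test:.3f}")
    print(f"EU with test:    {eu_with_test:.3f}")  # never lower than the line above

The catch, of course, is the "then act optimally" part: if a positive result triggers reflexive overtreatment instead of a proper posterior calculation, the test can easily be net harmful in practice, which is exactly the failure mode at issue here.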


This is a very difficult area.

Firstly, these consults are never as easy or logical in real life. If you think doctors don't understand Bayesian probabilities then how well do you think patients do?

Decision making when screen-positive results occur, with appropriate perspective, is virtually impossible for patients. I've lost count of the procedures I've done for incidental findings turned up by screening, which we KNOW are benign and will remain benign. Unfortunately, despite very strong reassurance, patients generally want these incidentalomas removed.

Secondly, unless your positive screening test can be dealt with a) by a procedure that has zero risk of complications, b) in a way where no stress is engendered while waiting for definitive results, and c) at a cost (screening + treatment) that could not be more effectively spent elsewhere, then there will ALWAYS be negative sequelae with screening. Therefore these negatives need to be assessed in a real-world RCT of sufficient size and with maximal control of bias, to make sure they don't outweigh the positives.

Your perfect world of logical patients balancing pre- and post-test probabilities, and acting accordingly doesn't exist.


Unless taking the test has a cost in and of itself -- such as if it requires surgery, so that the risk of something going wrong during testing might outweigh the information gained if the condition being tested for is rare.


Yup, definitely, I should have made that more clear than it was (I did mention "after discounting for the cost of the test", but it wasn't as obvious as it should have been). Test costs have to be weighed against benefits, but that whole process involves a lot more statistical sophistication than is typically applied in medical practice. Researchers are slightly better, though it's fairly common even for them to make hideous statistical mistakes, which is a real problem in the field.

IMO, if pre-med requirements included a bit more statistics and a bit less biology, we'd all be better off...


Ah, I missed that you included that.


Yes, the HN title was only intended to capture what I saw as the most salient result of the study, whose actual title is "Prostate-Cancer Mortality at 11 Years of Follow-up".

I'd be happy to change the title if this one is considered too editorializing.


It looks like this is the 2-year follow-up to a previous study.

"Analyses after 2 additional years of follow-up consolidated our previous finding that PSA-based screening significantly reduced mortality from prostate cancer but did not affect all-cause mortality. "

So, if I follow this correctly, your odds of dying from prostate cancer are still definitely lowered by screening, but your odds of dying overall are not. Perhaps this indicates that people who would have gotten prostate cancer are the sort of people (via lifestyle, genetics, diet, etc) who would die of something else anyway.


Prostate cancer is very, very slow to spread in most people. A pathology professor once remarked to me that he had never done an autopsy on a man over 70 who didn't have prostate cancer. Nothing in life is certain except for death, taxes, and prostate cancer ;).


This study basically folds in 2 additional years' worth of data, bringing the total study time to 11 years (or ~ 1.8 million man-years, in a sense).

I think your summary of the impact is a good one re: all-cause mortality.

It is curious, however, that there was a relative reduction in deaths from prostate cancer in the screened group. It turns out that the way randomization was done differed across countries. Detection of what procedures were done differed across groups. Treatment also differed across groups: for those randomized to screening, treatment was at major academic centers, suggesting that they may have had better care. I would think that this would make it more likely to cause a false rejection of the null hypothesis re: all-cause mortality, so I don't think it calls into question that part of the study. But it does trigger some concerns re: study design. A related editorial in the NEJM raises some of the points, too.


This is so frustrating. In medical studies the word "significant" does not mean what it means in normal English. It refers to the possibility that the effect was due to chance, not to how large, important, or clinically significant the effect is.

The study found no significant difference. This does NOT mean the study found there was no difference. What it means is that the study was not large enough, or the effect was not large enough, to show that the reduction was real at the 95% confidence level.

In fact the screened men died at a LOWER rate. It was just that the effect could possibly have been due to chance.

So realistically the report should say "PSA screening probably saves lives but more study is needed to be sure and to determine how large the effect is".


Somebody correct me if I'm wrong, but here's the way I understand it. One of the major problems with PSA-based screening is false-positives. Getting a false-positive result for cancer could certainly increase your risk of an early death. Cancer treatments put a lot of stress on you physically, and the diagnoses could also cause a lot of mental/emotional stress. Stress of either kind reduces life expectancy.


False positive rates should be understood by the doctors that recommend treatments, though. A doctor should be able to accurately tell a patient what the chance is that they suffer from a disease based on a positive result from a test.

Unfortunately, doctors tend to be terribly bad at this - they are generally quite inept at incorporating prior probabilities into their estimates, and assume that (for instance) if a test has only a 1% false positive rate, then a positive result means it's 99% certain that you have whatever it's testing for.

If it's not immediately obvious why that's an idiotically dangerous assessment, then you need to think a bit about a rare disease that only shows up in 1 out of a million people, and then consider the roughly 10,000 people out of every million tested who will test positive for it at a false positive rate of 1%. Don't worry, most doctors don't get this right at first, either, and they will happily suggest to all 10,000 of those people that they seek aggressive, even life-threatening treatment... the difference, of course, is that they're goddamn doctors, entrusted with keeping people alive, and they really ought to know better, whereas you're just some person on the Internet.
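
To put actual numbers on that (a sketch of the arithmetic, assuming a million people tested, the 1-in-a-million prevalence and 1% false-positive rate from above, and a generously perfect detection rate):

    # Base-rate arithmetic for the rare-disease example above.
    # Assumptions: 1,000,000 people tested, prevalence of 1 in a million,
    # a 1% false-positive rate, and (generously) 100% sensitivity.

    tested      = 1_000_000
    prevalence  = 1 / 1_000_000
    fp_rate     = 0.01
    sensitivity = 1.0

    sick    = tested * prevalence            # ~1 person actually has it
    healthy = tested - sick

    true_positives  = sick * sensitivity     # ~1
    false_positives = healthy * fp_rate      # ~10,000

    p_sick_given_pos = true_positives / (true_positives + false_positives)
    print(f"total positives:    {true_positives + false_positives:,.0f}")
    print(f"P(sick | positive): {p_sick_given_pos:.4%}")  # ~0.01%, not 99%

Roughly 10,000 positives, about one of whom is actually sick: a positive result leaves you around 0.01% likely to have the disease, not the 99% a naive reading of the false positive rate suggests.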


A classic systems problem. If something is this important, you don't leave it up to the decision of thousands of inhomogenous physicians. You create a system that ensures they don't even get to choose to get it wrong.


It probably just means prostate cancer represents a tiny portion of mortality, so even a significant difference appears tiny and within the bounds of error. Remember, the finding is not "no difference" but "failed to find a difference". Statistics does not ever prove "no difference".
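
A rough back-of-the-envelope sketch of that argument (all numbers below are invented for illustration, not taken from the trial): give each arm 80,000 men, let prostate cancer account for only a small slice of deaths, and assume screening genuinely prevents a fifth of those prostate deaths; the all-cause comparison still comes nowhere near statistical significance.

    # Why a real drop in prostate-cancer deaths can be invisible in
    # all-cause mortality.  All figures are hypothetical.
    from math import sqrt

    n = 80_000                         # men per arm (hypothetical)
    deaths_control  = 16_000           # ~20% all-cause mortality over follow-up
    prostate_deaths = 300              # a small slice of all deaths
    relative_reduction = 0.20          # assume screening really prevents 20% of them

    deaths_screened = deaths_control - relative_reduction * prostate_deaths

    p1, p2 = deaths_control / n, deaths_screened / n
    p_pool = (deaths_control + deaths_screened) / (2 * n)
    se = sqrt(p_pool * (1 - p_pool) * 2 / n)
    z = (p1 - p2) / se                 # two-proportion z-test

    print(f"all-cause mortality: {p1:.2%} vs {p2:.2%}")
    print(f"z = {z:.2f}")              # ~0.4, far below the ~1.96 needed for p < 0.05

Even a genuine effect on the cause-specific endpoint barely moves the all-cause total, so a trial can show a significant reduction in prostate-cancer deaths while failing to show one for deaths overall.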


Actually not even that. The study found a difference but the effect was not large enough or the study was too small to prove that the effect could not possibly have been due to chance.

Time and time again we get people who should know better equating "no significant difference found" with "found that there was no difference".


If you're interested in how our ability to detect cancers (and other diseases like diabetes and osteoporosis) has surged ahead of our ability to know when treatment is effective, I highly recommend Overdiagnosed by Welch, Schwartz, and Woloshin.

Right now medical science seems to be at an uncomfortable phase with certain diseases where we can't be sure our treatments will have a greater benefit than the cost of the treatments, and not just in terms of price but at the expense of our health itself.

http://www.amazon.com/Overdiagnosed-Making-People-Pursuit-He...


Actually there was a reduction in all-cause mortality. It's just that the study's design was such that the reduction could possibly have been due to chance. This is not at all the same thing as "finding no ... benefit".

This lack of statistical significance is probably mainly due to the small size of the study (160 men).

Note that the reduction in death rates from prostate cancer was large in practical terms and (statistically) significant.

Prostate cancer kills a minority of men. Take this fact, add in the fact that the study was small, and then throw in all other causes of death (which vary randomly), and it is not at all surprising that the result was not statistically significant. This is mainly due to a small study and noise from other causes of death.

I for one am going to keep having my PSA tests.


Having said that, the state of prostate cancer treatment and the lack of research into prostate cancer are a disgrace.

Prostate cancer kills similar numbers to breast cancer yet gets half the research funding. Thank goodness we live in a patriarchal society or the ratio would be even more in women's favor.

I have watched two male relatives die of prostate cancer and it is not a good way to go.


160k means 160,000 men. That's a pretty huge sample size. "Not statistically significant" means we cannot say with at least 95% certainty that the screening helps.


For those unfamiliar with the term "all-cause mortality":

http://answers.yahoo.com/question/index?qid=20090813211127AA...


Thank you for phrasing this in terms of all-cause mortality. If only every academic study (and submission) used this as a metric, the world would be a vastly better place.


Well, not quite. All-cause mortality is not the appropriate metric for treatments targeted at people already known to be sick. Where it is appropriate is the case of blanket programs, like cancer screening, intended to be applied to the population in general.


Why isn't all-cause mortality appropriate for people who are already sick?


Well, I had in mind things like, if you're trying to evaluate a flu vaccine or headache medication, you can obviously do much better than a ten year study of all-cause mortality.

But in the context of something like prostate cancer, maybe all-cause mortality is appropriate for people who are already sick, because you are trying to figure out how the risk of death from the disease balances against the risk of death from the treatment, and where the latter may manifest in nonobvious ways, e.g. many people with prostate cancer actually die of heart attacks. Was the stress of cancer treatment a contributing factor? All-cause mortality statistics might help answer that question.


I doubt it. If you relied on all-cause mortality for every study, you'd never get out of phase I trials.


The majority of "drugs" do not improve outcomes; they "improve" the condition.


Put another way, prostate screening does not improve outcome at all. In fact, given the horrible treatment involved, it degrades life quality while not improving outcome.


"Modern medicine" is in the stone ages due to the profit motive. Cure stops all income while "treatment" keeps the money coming in.


Unfortunately, it seems like these results could easily be misrepresented or misunderstood.



