
It looks like this is the 2-year follow-up to a previous study.

"Analyses after 2 additional years of follow-up consolidated our previous finding that PSA-based screening significantly reduced mortality from prostate cancer but did not affect all-cause mortality. "

So, if I follow this correctly, your odds of dying from prostate cancer are still definitely lowered by screening, but your odds of dying overall are not. Perhaps this indicates that people who would have gotten prostate cancer are the sort of people (via lifestyle, genetics, diet, etc.) who would have died of something else anyway.




Prostate cancer is very, very slow to spread in most people. A pathology professor once remarked to me that he had never done an autopsy on a man over 70 who didn't have prostate cancer. Nothing in life is certain except for death, taxes, and prostate cancer ;).


This study basically folds in 2 additional years' worth of data, bringing the total study time to 11 years (or ~1.8 million man-years, in a sense).

I think your summary of the impact is a good one re: all-cause mortality.

It is curious, however, that there was a relative reduction in deaths from prostate cancer in the screened group. It turns out that the way randomization was done differed across countries, and so did the ascertainment of which procedures were performed. Treatment also differed across groups: those randomized to screening were treated at major academic centers, suggesting they may have had better care. I would think this bias would make a false rejection of the null hypothesis for all-cause mortality more likely, so I don't think it calls that part of the study into question. But it does raise some concerns about study design. A related editorial in the NEJM raises some of these points, too.


This is so frustrating. In medical studies, the word "significant" does not mean what it means in everyday English. It refers to whether the observed effect could plausibly be due to chance, not to how large, important, or clinically meaningful the effect is.

The study found no significant difference. This does NOT mean the study found there was no difference. What it means is that the study was not large enough, or the effect was not large enough, to show that the reduction was real at the 95% significance level.

In fact, the screened men died at a LOWER rate. It was just that the difference could plausibly have been due to chance.

So realistically the report should say "PSA screening probably saves lives but more study is needed to be sure and to determine how large the effect is".
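
To make that concrete, here's a rough sketch with made-up numbers (nothing here comes from the actual trial): the screened group can show a visibly lower death rate and a standard two-proportion test will still call the difference non-significant at the 95% level.

    # Hypothetical numbers, not the study's data: a visible reduction in deaths
    # that a 95% significance test still fails to confirm.
    from statistics import NormalDist

    def two_proportion_z_test(deaths_a, n_a, deaths_b, n_b):
        # Two-sided z-test for the difference between two proportions.
        p_a, p_b = deaths_a / n_a, deaths_b / n_b
        p_pool = (deaths_a + deaths_b) / (n_a + n_b)
        se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
        z = (p_a - p_b) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return p_a, p_b, p_value

    # The screened group dies at a lower rate (19.5% vs 20.0%), yet p is about
    # 0.37, so the result gets reported as "no significant difference".
    rate_screened, rate_control, p = two_proportion_z_test(1950, 10_000, 2000, 10_000)
    print(f"screened={rate_screened:.3f}  control={rate_control:.3f}  p={p:.2f}")

Feed the same rates into a study ten times as large (19,500 vs 20,000 deaths out of 100,000 in each arm) and p drops below 0.01 - same effect, now "significant". That is the "more study is needed" part.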


Somebody correct me if I'm wrong, but here's the way I understand it. One of the major problems with PSA-based screening is false positives. Getting a false-positive result for cancer could certainly increase your risk of an early death: cancer treatments put a lot of stress on you physically, and the diagnosis can also cause a lot of mental and emotional stress. Stress of either kind reduces life expectancy.


False positive rates should be understood by the doctors who recommend treatments, though. A doctor should be able to accurately tell a patient the chance that they actually have a disease, given a positive test result.

Unfortunately, doctors tend to be terribly bad at this - they are generally quite inept at incorporating prior probabilities into their estimates, and assume that (for instance) if a test has only a 1% false positive rate, then a positive result means it's 99% certain that you have whatever it's testing for.

If it's not immediately obvious why that's an idiotically dangerous assessment, think about a rare disease that shows up in only 1 out of a million people, and then consider what happens when a million people are tested with a 1% false positive rate: roughly 10,000 of them will test positive, and only about one of them actually has the disease. Don't worry, most doctors don't get this right at first either, and they will happily suggest to all 10,000 of those people that they seek aggressive, even life-threatening treatment... the difference, of course, is that they're goddamn doctors, entrusted with keeping people alive, and they really ought to know better, whereas you're just some person on the Internet.
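
Spelled out with those same numbers (and assuming, purely for simplicity, a test that never misses a real case), the arithmetic looks like this:

    # Hypothetical numbers from the comment above: a 1-in-a-million disease and
    # a test with a 1% false positive rate; perfect sensitivity is assumed.
    prevalence = 1 / 1_000_000
    false_positive_rate = 0.01
    sensitivity = 1.0  # assumption: the test never misses a real case

    # Bayes' rule: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
    p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    p_disease_given_positive = sensitivity * prevalence / p_positive

    # Prints ~0.0001: of the ~10,000 positives per million people tested,
    # only about one actually has the disease.
    print(f"P(disease | positive test) = {p_disease_given_positive:.6f}")

The 99% figure describes the test; the ~0.01% figure describes the patient. The gap between them is exactly the prior probability that gets left out.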


A classic systems problem. If something is this important, you don't leave it up to the individual decisions of thousands of inhomogeneous physicians. You create a system that ensures they don't even get the chance to get it wrong.


It probably just means prostate cancer represents a tiny portion of overall mortality, so even a real reduction looks tiny and falls within the bounds of error. Remember, the finding is not "no difference" but "failed to find a difference". Statistics can never prove "no difference".
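
A back-of-the-envelope calculation with invented numbers (not taken from the paper) shows why:

    # Illustrative, made-up numbers: prostate cancer is a small slice of all
    # deaths, so even a sizeable relative reduction barely moves the total.
    all_cause_deaths_per_1000 = 200   # deaths over the follow-up period
    prostate_deaths_per_1000 = 5      # the slice attributable to prostate cancer
    relative_reduction = 0.20         # assume screening cuts those deaths by 20%

    deaths_averted = prostate_deaths_per_1000 * relative_reduction
    new_all_cause = all_cause_deaths_per_1000 - deaths_averted

    # 200 -> 199 deaths per 1000: a shift this small is easily swallowed by
    # sampling noise unless the trial is enormous.
    print(f"all-cause deaths: {all_cause_deaths_per_1000} -> {new_all_cause} per 1000")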


Actually, not even that. The study did find a difference; the effect was just not large enough, or the study too small, to rule out chance as the explanation.

Time and time again we get people who should know better equating "no significant difference found" with "found that there was no difference".



