But doesn't this approach make it nigh impossible to catch people who don't fit the mold? It almost seems to become a tautology: you don't capture diseases in people you don't think have them, because you only look in people you do think have them. And the only way the mold changes is for enough people to die that you reevaluate the model.
I am not a doctor, but I cannot fathom why a doctor would ever advocate against more data. Even if that doctor chooses to ignore it, at least let the patient utilize it as he sees fit. As someone who has regularly had to argue with doctors to get a test done, it is incredibly frustrating.
Well, that’s part of why there’s constant research into what risk factors / predictive factors are. These aren’t mutually exclusive activities.
“Data” implies something neutral. Nothing about lab tests is neutral. Without contextual information and studies on how to -interpret- a finding, it’s just potentially terrifying noise. And terror usually results in action. Poorly informed action is often harmful. So... we are trying to prevent harm to our patients.
We are earnestly working on all fronts to build the studies to better interpret these findings. But we just aren’t there yet.
For a rational agent the expected value of information is never negative. (Of course it can be less than the cost of the test.) If a system is treating it as negative, that's a problem to fix.
I'm reminded of a recent visit to a doctor who refused to say anything quantitative about a test. Sure, your average client knows little decision theory, but this doctor either didn't know any either or pretended not to follow. To make a good decision you need both probabilities and utilities; the doctor presumably knows the former best, while the client best knows and cares about the latter, so they need to be able to communicate this way.
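To make the "never negative" part concrete, here's a toy sketch in Python. Every number is made up, the test is assumed to be perfect, and the treat-or-skip framing is my own simplification, but it shows the structure: learning the result before you act can only raise (or leave unchanged) your expected utility, before you count the test's own costs.

```python
# Toy value-of-information calculation. All numbers are invented;
# this illustrates the decision theory, it is not medical advice.

p_disease = 0.02        # prior probability the patient has the disease
utilities = {
    ("treat", "sick"): -10,   # treated and actually sick: disease handled, some treatment harm
    ("treat", "well"): -3,    # treated but healthy: side effects for nothing
    ("skip",  "sick"): -100,  # untreated and sick: bad outcome
    ("skip",  "well"): 0,     # untreated and healthy: nothing happens
}

def expected_utility(action, p):
    """Expected utility of an action when P(sick) = p."""
    return p * utilities[(action, "sick")] + (1 - p) * utilities[(action, "well")]

def best_utility(p):
    """Utility of the best available action given the current belief."""
    return max(expected_utility(a, p) for a in ("treat", "skip"))

# Acting on the prior alone:
eu_without_test = best_utility(p_disease)

# With a (hypothetically perfect) test we learn the true state first,
# then pick the best action for each possible result:
eu_with_test = (p_disease * best_utility(1.0)
                + (1 - p_disease) * best_utility(0.0))

print(f"EU without test: {eu_without_test:.2f}")   # -2.00 with these numbers
print(f"EU with test:    {eu_with_test:.2f}")      # -0.20
print(f"Value of info:   {eu_with_test - eu_without_test:.2f}")  # 1.80, never below 0
```

With an imperfect test the gain shrinks, but it can't go below zero; only the acquisition costs (money, side effects, anxiety, follow-up cascades) can make the test a net loss, which is exactly the distinction being argued downthread.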
Cool, so your serum zorblaxian levels are 300 ng/L. The finding was incidental, so it’s not tied to existing symptoms / clinical suspicion. It’s a molecule involved in the inflammatory cascade. We don’t know what diseases it is or isn’t associated with, nor how it changes the probability of having any of the diseases it could be associated with, nor how it changes prognosis. But, you have a number which your local lab pegs at 1 standard deviation above the mean for the sample they calibrated their measurement technique on.
So, should we now do a work-up for every disease known to man with an inflammatory component? If you have one, fine; but every test - with its attendant complications and costs - for every disease you don’t have will have a net negative effect on your health. At baseline you have no symptoms or clinical suspicion for any of these (again, that’s the tautology of an incidental finding), so the most likely explanation is that you actually have no disease at all and this is a spurious value. So, what’s the advantage of this incidental finding?
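To put a rough number on “spurious”: reference ranges are conventionally set to cover about 95% of healthy people, so in a perfectly healthy person a broad panel will flag something “abnormal” surprisingly often. A toy calculation (panel sizes arbitrary, and real analytes are correlated, so take it as illustration only):

```python
# Chance that at least one value on a panel comes back "abnormal" in a
# perfectly healthy person, assuming independent tests and a reference
# range covering 95% of the healthy population. Panel sizes are arbitrary.

p_flag = 0.05  # probability a single healthy value falls outside the reference range

for n_tests in (1, 14, 30):
    p_any_flag = 1 - (1 - p_flag) ** n_tests
    print(f"{n_tests:2d} tests -> {p_any_flag:.0%} chance of at least one spurious 'abnormal'")
# 1 test   ->  5%
# 14 tests -> 51%
# 30 tests -> 79%
```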
Framing it as a choice between "have a cookbook decision tree that's been validated by RCTs in exactly this context" and "do a lot of costly (in resources and side effects) followup tests to show we're doing everything we can" is an example of the irrationality that needs fixing. People are capable of actually reasoning with uncertainty when they care about outcomes. When it's you, personally, facing a decision in your life, you don't throw up your hands and say "It's too complicated to decide! There are no actuarial tables about exactly this situation!" Having to decide means having to make a bet. You can be a better or a worse gambler, but we're all gamblers. Medicine as a profession seems to want to pretend they're not, rather like "investors" pretend to be qualitatively different from "speculators".
It’s not “cookbook decision tree”, it’s “data from which to make meaningful inferences about this finding.” It would be nice if we always had that; with incidental findings, we often don’t.
Put it this way: my doctor ordered a test for me, got the results, and made a recommendation. I was ignorant of both the costs (of all sorts) beforehand and of the product of probability and utility on which he (implicitly and unavoidably unless irrationally) based the decision. He did make a meaningful inference from this data, as anyone must to make a decision; he just refused to help me, as a consultant instead of a master, to make my own.
> Cool, so your serum zorblaxian levels are 300 ng/L. The finding was incidental, so it’s not tied to existing symptoms / clinical suspicion. It’s a molecule involved in the inflammatory cascade.
This isn't that hard. You look at it again in 6 months. And you ask "Did it go up? Did it go down? Is this just your baseline?" And you keep your eyes open a little more strongly for issues that might correlate.
And, you know what, I, the patient, am FAR more invested in keeping tabs on whether things are going right or wrong in my own body than any doctor.
> And, you know what, I, the patient, am FAR more invested in keeping tabs on whether things are going right or wrong in my own body than any doctor.
As someone who has had to figure out his health on his own after multiple doctors have failed or many never even attempted to help, I cannot agree with this more.
"Of course it can be less than the cost of the test." includes non-monetary costs. I thought this was clear since the benefit we're talking about is obviously not wholly monetary. (Or even mostly.)
I suppose you're right that the expected value of information, not including the acquisition costs, is never negative.
That feels like cheating though--this information isn't free.
My argument is that the expected value of this information, including acquisition costs, can very easily be negative, since you can't just "ignore" being dead. These costs are extremely large in my snarky autopsy suggestion, but dominate for whole-body MRIs + followups too. The catch is that the scan doesn't have huge benefits and the costs of the followups, while non-trivial, aren't exactly open-heart surgery either, so both are in the neighborhood of zero, except for the monetary expense, which will be crazy high.
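A back-of-envelope version of that argument, with every number invented purely to show the structure of the calculation:

```python
# Rough expected net value of a screening whole-body MRI. These inputs are
# made up for illustration; plug in your own estimates.

p_actionable      = 0.002    # chance the scan catches something truly actionable
benefit_if_found  = 50_000   # dollar-equivalent benefit of catching it early
p_incidentaloma   = 0.30     # chance of an incidental finding that triggers follow-up
harm_of_followups = 2_000    # dollar-equivalent expected harm + cost of that work-up
scan_cost         = 2_500    # price of the scan itself

net = (p_actionable * benefit_if_found
       - p_incidentaloma * harm_of_followups
       - scan_cost)
print(f"Expected net value: {net:+,.0f}")   # well below zero with these inputs
```

With these (made-up) inputs the monetary cost dominates, which matches the intuition above: everything else hovers near zero, and the expense doesn't.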
Yup, agreed! In fact I wouldn't have done the test mentioned above if I'd been better informed (an MRI with gadolinium contrast; I was only told the latter bit the day I showed up for it. There seems to be a bit of controversy about possible low-level health effects of this contrast agent, and considering I was very probably fine a priori, I think it was a mistake to do this. But it's kind of another thing to decide this on the spot after making an appointment.)
(To say nothing of the extra $1000 surprise bill...)
Anyway, what is raising my hackles doesn't seem rational in the same way. It seems to have the flavor of "when we look and see certain signs, our system too often takes stupid harmful costly actions on that information. Therefore, don't look!" It'd be reassuring to continue "This is a workaround to an admittedly irrational system while we have the following people researching better ways to improve." But that's not the sort of thing I've read.
My hunch from reading these comments is that the doctors tend to be experienced and pragmatic, and their experience teaches them what tests matter and when. The problem with this, though, is that they start to believe certain tests are useless when they might not be.
It becomes more complicated when you consider doctors have limited time and resources and cannot spend too much time with any single patient, at least the way the US healthcare system is designed. I believe this is a large contributing factor for why they behave the way they do - it's how the system is designed. And it's why I have all but given up on doctors even trying to help me with my bizarre symptoms, where I have been accused of lying multiple times.
Sympathies -- I've had some experience of that sort too. I'd have more sympathy with the doctors' side (it is a very difficult position) if they weren't set up as gatekeepers.
I understand it is not a black-and-white situation, and that data is obviously not perfect. And I certainly am not advocating that doctors should order a gamut of tests unnecessarily. But if there is any doubt, how likely is the added data to harm more than help? And I firmly believe the doctor should not fight the patient on a test unless there is a strong reason not to (apart from an obvious medical disorder such as Munchausen syndrome, it is hard for me to think of a reason).
If the added data is noise, it can easily be ignored. If it's not noise, then you're lucky to have it. And figuring out whether or not it is noise is important, but you can't do that without the data.
But again, I am not a doctor, and how much bandwidth they have and where their priorities lie is not something I intimately understand. However something that has become blindingly obvious to me is that most doctors do not have a firm grasp of statistics, and will advocate for new drugs when the actual test results are borderline statistically insignificant and easily explained away by confounding factors (most obviously the placebo effect). Sadly not all trials are double-blind, something else I do not understand.
You can’t ignore the noise because we don’t know it’s noise. I know I’m failing to get that idea across, but I honestly don’t know how to articulate it better than I have been. I can tell you’re honestly trying to understand, and I feel the blame is likely on me as a communicator.
Possibly what I’ve failed to communicate is this:
MRI, CXR, etc. are not images of the body. It’s not like getting a photograph of a liver and saying, at least we know this is or isn’t going on in the liver. They’re indirect measures of certain attributes of the body, such as tissue density, which we use - coupled with the patient’s medical information and the mechanisms of likely diseases - to infer what’s happening. That’s why reading radiology is a medical specialty, and not something anyone with an anatomy background can do. (There’s a radiologist currently browsing the thread - he’s very welcome to correct me if I’m wrong about what radiology “is”.)
Because of this, every such finding has to be interpreted in a context, and studies tell us how.
Completely out of context findings aren’t a big problem if they’re completely unambiguous: hey, that bone is in two pieces and it should be in one.
What about the finding that isn’t, though? This is equivalent to not having any information on a test’s false positive / false negative rate, only now it’s open-ended to “every condition that could look like that thing” because the defining characteristic of an incidental finding is that it’s -incidental-. It’s not related to any symptoms. So what do I do with “every disease or non-disease process that could potentially look like a spot on the lung, without any accompanying symptoms of that disease”?
What I believe is the responsible answer is: “if I think the pre-test probability isn’t borderline zero, AND the post-test probability would change my course of treatment or diagnosis, order the test. If the pre-test probability is so low that any positive test result would be very likely to be a false positive and thus force me to act in a manner harmful to the patient, don’t order it - it shouldn’t be allowed to change the course of treatment. If the pre-test probability is already so high that any negative result is likely a false negative, don’t order it - it shouldn’t be allowed to change the course of treatment. Only order tests whose results should impact the course of diagnostics or treatment.” What do I do with findings that haven’t been studied in a given context, so I have no clue what their impact on the post-test probability of a diagnosis is? I don’t know. But “test just in case” isn’t the zero-risk option. There aren’t any zero consequence options.
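To make the pre-test / post-test point concrete, here’s a quick Bayes’ rule calculation. The 90% sensitivity and 90% specificity are invented, just to show how a near-zero pre-test probability turns a “positive” result into mostly false positives:

```python
# Post-test probability of disease given a positive result, via Bayes' rule.
# Sensitivity and specificity are hypothetical values for illustration.

def post_test_prob(pre_test, sensitivity=0.90, specificity=0.90):
    true_pos  = pre_test * sensitivity
    false_pos = (1 - pre_test) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

for pre in (0.001, 0.01, 0.2):
    print(f"pre-test {pre:6.1%} -> post-test {post_test_prob(pre):5.1%} after a positive result")
# pre-test   0.1% -> post-test  0.9%
# pre-test   1.0% -> post-test  8.3%
# pre-test  20.0% -> post-test 69.2%
```

At a pre-test probability near zero, even a “positive” leaves you overwhelmingly likely to be disease-free, which is why acting on it tends to do net harm.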
Not every test has been validated by an RCT, because grant funding agencies don’t provide the budget; plus, many RCTs we’d like to do are unethical (if you have good reason to believe one course of therapy is superior to another, you don’t have the clinical uncertainty to ethically allow randomizing people into an inferior therapy); plus, many sub-populations are just too uncommon to build an RCT on without a gigantic budget that facilitates long collection periods across multiple institutions.