> This is not an abstract problem. Here is one example. For years in A&E, patients with serious head injury were often treated with steroids, in the reasonable belief that this would reduce swelling, and so reduce crushing damage to the brain, inside the fixed-volume box of your skull.
> Researchers wanted to randomise unconscious patients to receive steroids, or no steroids, instantly in A&E, to find out which was best. This was called the CRASH trial, and it was a famously hard fought battle with ethics committees, even though both treatments – steroids, or no steroids – were in widespread, routine use. Finally, when approval was granted, it turned out that steroids were killing patients.
> This was an extraordinary piece of work. At the end of the trial, where the head injuries were pretty bad (a quarter of the people died), it turned out there were two and a half extra deaths for every one hundred people treated with steroids.
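The "two and a half extra deaths for every one hundred people treated" figure is an absolute risk difference. A minimal sketch of that arithmetic, using hypothetical group sizes and death counts (not the actual CRASH trial data):

```python
def absolute_risk_difference(deaths_a, n_a, deaths_b, n_b):
    """Risk in group A minus risk in group B, expressed per 100 patients."""
    return (deaths_a / n_a - deaths_b / n_b) * 100

# Illustrative numbers only: suppose 255 of 1000 steroid patients died
# versus 230 of 1000 controls.
extra = absolute_risk_difference(255, 1000, 230, 1000)
print(f"{extra:.1f} extra deaths per 100 treated")  # 2.5
```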
It's amazing to me that they didn't track this from the beginning and run the experiment to vet the idea. I mean, they went all in on it; why not give it to 50% of head-injury patients and see?
It's impossible to rationally consider all the myriad decisions we must make in the modern day, so people rely on rules of thumb that work in most cases (or that we hope work in most cases). For example, "don't perform medical experiments on people if that may cause them more harm than not experimenting". Unfortunately, we often start treating the rule of thumb as the reason itself, rather than as a heuristic that lets us short-circuit rigorous reasoning, and then we end up with people using those rules of thumb to deny or criticize actions where they make no sense.
The highest standard for decision making is definitely not "going with your gut". I wonder if anyone is actually claiming this, or if you just like building strawmen.
Typical reasons to adopt a practice are politics and marketing, then legacy, and only then effectiveness...
Do any of these studies of "medical reversals" attempt to estimate how often reversals should be made ideally?
Indeed, from the second paragraph (emphasis mine):
> Medical reversals are a subset of low-value medical practices and are defined as practices that have been found, through randomized controlled trials, to be no better than a prior or lesser standard of care (Prasad et al., 2013; Prasad et al., 2011).
That is, the authors assert that something is low value if it is later proven to not work.
Why give any treatment that hasn't been through a randomised trial? (Unless, of course, you are giving it as part of a trial)
The classic tongue-in-cheek example is parachutes:
> Objectives: To determine whether parachutes are effective in preventing major trauma related to gravitational challenge.
> Design: Systematic review of randomised controlled trials.
> Main outcome measure: Death or major trauma, defined as an injury severity score > 15.
> Results: We were unable to identify any randomised controlled trials of parachute intervention.
> Conclusions: As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.
It turns out that study design can also be a problem, even with 'gold standard' designs like the RCT...
In days past a lot of medicine was based on what seemed to work or what ought to work. Unless the effect is really obviously disastrous, it's easy to believe an intervention is helping even when it's not. Medical research was of course done, but doctors mostly went with their gut or whatever article they'd recently read in a journal and liked the sound of.
It's only since the late '80s and through the '90s that Evidence Based Medicine really started to get going. Since then doctors have (sometimes slowly) come round to the idea. However, all this research takes time. What do you do about the interventions that haven't been tested yet?
You could stop doing anything that hasn't been thoroughly researched, but that risks letting people die or suffer unnecessarily.
Shouldn't the focus be on applying treatments backed by robust evidence in the first place, rather than on medical reversals? Of course the two are connected, but the wording seems odd. Medical reversals seem to be a symptom of the problem, not the root cause.
Of course, I might be misunderstanding the terminology.
edit: I've read the abstract and it seems I am indeed misunderstanding the definition, but for the life of me I cannot understand what "medical reversal" means in layman's terms.
Medical reversal occurs when a new clinical trial — superior to predecessors by virtue of better controls, design, size, or endpoints — contradicts current clinical practice. In recent years, we have witnessed several instances of medical reversal. Famous examples include the class 1C anti-arrhythmics post-myocardial infarction (contradicted by the CAST trial) or routine stenting for stable coronary disease (contradicted by the COURAGE trial).
The article I link to has one author in common with the paper originally posted, way above. Shame that this author seems to have lost the skill of ensuring that terms are properly defined.
I was confused because "reversal" sounds to me as the act of ceasing to do something, i.e. "reverting" the treatment (if you're a programmer: I thought of "reversal" as in "reverting a mistaken commit using git"). I now see it's a technical term which means the opposite!
Now it all makes sense.
I think an important difference is that current clinical practice is not necessarily thought to be "unproven/inconclusive." Rather, I think people think it has a solid foundation, but better investigation reveals that not to be true.
So this is similar to part of the problem with low value or harmful medical practices proliferating. If you are doing x, you probably won't be doing y. Its use actively excludes the use of better therapies in most cases.
But, worse, biological processes are complicated and there can be critical windows of time for x to happen. If people are ignorant of such a window and how to use it effectively, some people will have dramatically better outcomes than others in a way that promotes the all-too-common perception that it's just random. For medical issues, this can be literally life or death.
Furthermore, use of low value procedures pollutes the data with lousy outcomes. If you don't identify that x treatment is the culprit, then the perception that patients with x condition have yadda prognosis proliferates. This actively promotes poor outcomes by encouraging doctors and patients alike to accept a poor outcome as the norm and to be expected for your condition.
Additionally, once a practice proliferates, it tends to persist. It becomes a habit. Habits are hard to break.
And doctors are people. Most people want to do something, anything rather than doing nothing. For a doctor, doing something, anything is probably less likely to get them sued for malpractice than taking a wait-and-see approach, even if waiting is the wiser move. It's going to be harder to defend the choice to do nothing if it goes to court. It flies in the face of how the human mind works.
It takes substantial education, wisdom and self restraint to do nothing when the problem is your responsibility to fix. Even if you know that's currently the best course of action, it is all too easy to cave in the face of social pressure, especially if you have reason to believe that not going along to get along may come with substantial penalties (like a malpractice lawsuit).
To my mind, the following linked article is related to that last point, but I also wrote it and I've had four hours of sleep. Apologies if it seems unrelated:
The cultural expectation that most encounters with a doctor result in an action could be changed. The problem is that visiting a doctor costs hundreds of dollars for a short amount of time. It doesn't matter whether you have insurance or free universal health care; it still costs hundreds of dollars regardless of the layers of abstraction you put on top of the billing.
I personally wish encounters with doctors resulted in more tests or other data gathering (and hopefully that data made available de-identified for analysis)
But a randomized, controlled trial can produce meaningful results only where just one malady is being treated. The DSM is full of diagnoses that lump together a whole family of pathologies with (sometimes only superficially) similar symptoms, but entirely different causes. This is especially notable in psychiatry, but far from unique; for an extreme example, cancers.
The reason such trials produce bad results is that there is no way to know which patients have the particular pathology whose cause is addressed by the treatment under test, without actually administering it to see.
Actually performing such a trial, with an effective treatment agent, tends to produce strong results for a few patients, and null or actually harmful results in the rest. Nothing is wrong with the treatment, when applied to the patients who should get it, but the trial fails to produce a positive result.
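This dilution effect is easy to see with a toy simulation. A sketch under purely hypothetical numbers (not any real trial): only 10% of enrolled patients have the pathology the treatment addresses, the per-responder benefit is large, yet the trial-level average effect is small.

```python
import random

def simulate_arm(n, treated, responder_frac=0.10,
                 base_recovery=0.30, responder_recovery=0.90, rng=None):
    """Return the recovery rate for one trial arm of n patients."""
    rng = rng or random.Random()
    recovered = 0
    for _ in range(n):
        is_responder = rng.random() < responder_frac
        # The treatment only helps the responder subgroup;
        # everyone else recovers at the base rate.
        p = responder_recovery if (treated and is_responder) else base_recovery
        recovered += rng.random() < p
    return recovered / n

rng = random.Random(42)
control = simulate_arm(20_000, treated=False, rng=rng)
treated = simulate_arm(20_000, treated=True, rng=rng)
print(f"control recovery: {control:.2f}, treated recovery: {treated:.2f}")
# A 0.60 benefit for responders shows up as only ~0.06 at the trial level.
```

Without a way to identify responders in advance, the averaged result can look null or marginal even though the treatment works well for the subgroup that should get it.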
Confusing a bad trial with a bad treatment should be an error made only by ignorant observers, but it is all too commonly seen in apparently respectable media.
Do you have any actual data to support this assertion, or is it just a weak sophism, same as used to support ineffective "integrative medicine" practices?
Insisting on data that demonstrates invalid RCTs while assuming that DSM diagnoses precisely distinguish causes, on the basis of no data at all, puts the cart before the horse.