This article simultaneously insists that we must NOT abandon the old system of large-scale clinical trials or move towards faster means of approving drugs, while lamenting that CAR-T remains unapproved for certain cancers where large-scale trials have shown it to not be sufficiently effective. It says that we need to "make trials better" without proposing any means to do that. What a waste of time.
As Derek Lowe summarized in the accompanying blogpost[0], the main recommendations of this article are:
“better decisions about what data to collect and better ways to collect it.”
Yeah, sometimes people don't get their point across very well, but it is there.
“Most clinical trials try to track every single possible data point about patients, thus inflating costs. But Landry and Horby’s trial could carefully strip down the data needed to the essentials, focusing on information that could easily be collected from medical records.”
A couple sentences later:
“The result was a single study, RECOVERY, that determined more than any other what medicines could help Covid patients.”
As for why we aren't doing better, just last week there was a HN post that I personally enjoyed:
Yeah... this doesn't really sound prescriptive. The "single study" claim about COVID is particularly annoying. RECOVERY wasn't done in isolation, with no knowledge of every other study; there was virtually no way to jump straight to that study.
Specifically, the stripping down of the data was informed by every other study.
Better decisions about what to collect would mean, for example,
a higher ratio of 'progress towards a definitive answer' per dollar spent.
Perhaps with a smidge of making people more likely to participate.
Better ways to collect data has two routes for improvement. The boring one is automation: just chuck smart watches on the subjects and have those track data on an hourly basis rather than via monthly doctor visits.
The exciting but scary one is standardized health records keeping that allows tracking outcomes. That also requires a process for analyzing that data on a national scale whilst preserving privacy. The crypto to do that exists, but getting trust will be difficult. There could be other ways of doing analysis without hurting privacy too much. This system could then be used whenever 'real-world-data' is appropriate. Importantly, it could also be used to track data from the randomized medical trials!
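One existing building block for analysis that preserves privacy is differential privacy: a central registry answers aggregate queries with calibrated noise added, so no single patient's record can be inferred from any release, while averages over many queries stay accurate. A minimal sketch, assuming a hypothetical registry of binary outcomes (the data, epsilon, and query are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical national registry: 1 = hospitalized, 0 = not.
records = rng.integers(0, 2, size=10_000)

def private_count(data, epsilon=1.0):
    """Release a count with Laplace noise.

    The sensitivity of a count query is 1 (one patient changes the
    count by at most 1), so noise scale 1/epsilon suffices.
    """
    true_count = int(data.sum())
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

true = int(records.sum())
releases = [private_count(records) for _ in range(1_000)]
# Any single release hides individual patients; the mean stays accurate.
print(true, np.mean(releases))
```

This is only the easy half of the problem; the hard part, as noted above, is earning trust in whoever runs the registry.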
Medicine has problems that aren't even spelled out properly at the theoretical level. EHR portability is just a symptom of that issue.
Clever solutions like MYCIN, a rule-based system mapping symptoms and conditions to possible diagnoses, are largely ignored because it is more profitable to maintain a facade and saddle increasingly siloed clinicians with debt.
Actually, there is widespread use of what are called clinical decision support systems (CDSS). But, one of the primary problems with these systems is they generate a large number of false positives and negatives depending on the context.
You still need a physician/nurse/pharmacist to "hand fly the plane". In my own practice, they have been useful to an extent, but they are light years away from automating away clinicians.
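To give a sense of what MYCIN-era systems actually do under the hood, here's a toy forward-chaining sketch. The rules and findings are invented for illustration; the real system had hundreds of hand-written rules plus certainty factors:

```python
# Toy forward-chaining rule engine in the spirit of MYCIN-era expert systems.
# Each rule maps a set of required findings to a conclusion; everything is
# hand-coded, nothing is learned from data.
RULES = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"fever", "productive_cough"}, "suspect_pneumonia"),
    ({"suspect_pneumonia", "gram_positive_culture"}, "suspect_strep_pneumoniae"),
]

def infer(findings):
    """Repeatedly fire rules until no new conclusions can be added."""
    known = set(findings)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known - set(findings)

print(infer({"fever", "productive_cough", "gram_positive_culture"}))
# The same rigidity that makes this auditable also produces false
# negatives: any finding outside the rule set is simply ignored.
```

Which is exactly the false positive/negative problem in practice: the system is only as good as the contexts its authors anticipated.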
Medical education is largely qualitative, overwhelming physicians with what are basically word associations approximating the top five causes and effects for each body system.
Thankfully, I assume most can look things up effectively.
This becomes the route, and I'd argue from my experience as a patient that treatment pathways in those CDSSes bias towards the simplest condition-treatment decision calculus, so you can largely strike:
- in-depth history
- change over time
- transient conditions not needing treatment
- drug and non-drug treatment combinations
- side effects, long term effects
- therapy duration
- a clear risk benefit analysis
- novel monitoring and feedback mechanisms
Doctors burdened with tremendously ineffective EMR systems will miss these things even when presented on a silver platter, and you report that the CDSSes do the same thing.
Even better - how are they constructed? Voting. In the EU, care standards go to a congress of sorts, where largely "if x then y" conditional universal facts are slapped onto a document that is supposed to be adhered to. Or in the US (God knows how poisoned these are by "interests"), guilds of doctors construct detailed decision trees by committee.
That's before we even jam it into some software. It's tough, I understand, but I'm personally a bit inflamed about the matter.
And what of CV/ML-based radiological interpretation? ... similarly, how is it that psychiatry doesn't identify cures?
I appreciate your frustration as a patient. Myself, I am both a patient and a clinician.
I do think you're missing some things about the way things work "behind the curtain", and I was hoping to offer some insight.
I'm not an expert in medical education, but insofar as it is similar to nursing education, a good one is actually taught from "first principles" of anatomy, physiology, chemistry, physics, etc., and you diagnose from there with the assistance of heuristics.
To the extent that medical education is qualitative, that is because human beings experience things in very qualitative ways. You're treating patients and not fixing servers.
Hope this helps. Doing medicine is extremely hard, and automating it is an equally difficult task.
You won't have an argument from me about difficult to use EMR systems.
Diagnostic systems aren’t used because they aren’t useful. Doctors observe-investigate-record; diagnosing a condition from the last step isn’t helpful because in order to get there, a doc has to have already acted on a hunch.
> the systems in question don't improve that situation
I don't think that's true. Look at the statistics or ask patients: doctors are very inefficient at detecting cancer.
On the contrary, medical imaging technologies are very efficient. Even a simple phone camera and app can automatically detect things like carcinoma or melanoma, when most general practitioners are not able to recognize them for what they are.
Another statistical point is that not all doctors are equal. Depending on whether you live in the countryside or a big city, you do not have the same odds of survival. In Europe, simply crossing a border can mean survival varies by 20% for the same type of cancer.
Eight years ago I designed a simple, low-cost (~$100) tool to detect heart failure [0] using resources from PhysioNet [1].
The PhysioNet competition was motivated by the fact that a third of doctors were shown to be unable to detect heart failure with a stethoscope. Competitors showed that automatic tools can detect it nearly 100% of the time.
This does not mean the end of general practitioners; they will simply use other, more scientific methods (no more hunches), and in any case more specialists will be needed to operate those smart tools.
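A toy sketch of the kind of feature-based detection those competitions encourage (synthetic data; the sampling rate, frequency bands, and signal model are illustrative assumptions, not my actual design). Murmurs from failing valves put energy into a higher frequency band than the normal low-frequency S1/S2 sounds, which a simple band-energy ratio can pick up:

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(42)
fs = 2000  # Hz, assumed sampling rate
t = np.arange(0, 2.0, 1 / fs)

def heart_sound(murmur=False):
    """Synthetic phonocardiogram: low-frequency S1/S2 thumps,
    optionally with broadband murmur-like energy added."""
    sig = np.zeros_like(t)
    for beat in np.arange(0.1, 2.0, 0.8):          # ~75 bpm
        burst = np.exp(-((t - beat) ** 2) / 1e-4)  # short Gaussian envelope
        sig += burst * np.sin(2 * np.pi * 50 * t)  # heart sounds near 50 Hz
    if murmur:
        sig += 0.3 * rng.standard_normal(t.size)   # broadband murmur noise
    return sig

def murmur_ratio(sig):
    """Fraction of signal energy in the 150-400 Hz murmur band."""
    b, a = butter(4, [150, 400], btype="bandpass", fs=fs)
    band = filtfilt(b, a, sig)
    return np.sum(band ** 2) / np.sum(sig ** 2)

normal = murmur_ratio(heart_sound(murmur=False))
abnormal = murmur_ratio(heart_sound(murmur=True))
print(normal, abnormal)  # the abnormal ratio is clearly higher
```

Real entries use far richer features and real recordings, but the principle is the same: the machine measures what the ear misses.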
This. They act well as warning systems in case you may have missed something, but they don't really solve the issue of whom to work up in the first place, or how to do it.
MYCIN, the rule-based system from the 1970s? It was once touted as "AI", but, like most knowledge-based systems, it just takes a set of hand-coded rules and interprets them in a rather simple way.
Most of what you learn in medical school is a set of hand-coded rules. It has a fancier legal name, "standard of care" but that's pretty much what it is.
Only for routine cases. Medicine is full of uncertainty, edge cases, and new research.
The difficulty is identifying what's novel vs. routine; think of the first few COVID cases and the 2022 monkeypox outbreak. When exactly do you sound the alarm?
That’s completely true but doctors also miss things, or make errors, that would have been trivially caught by a deterministic rule-based system.
Just look at how successful checklists have been in surgery (or air transport), that’s a very simple rule-based system that has saved countless lives.
Helping doctors to diagnose effectively requires someone to ask them the right questions. All too often the job of asking questions is left to the patient.
Checklists are great for surgery or routine care because the goal is well understood. Diagnosis is more complicated though I agree a deterministic rule-based system can be useful for first responders and emergency rooms. Prescreening questions asked by a nurse and fed into a rule based system might be helpful in general medicine.
The issue is, when you have access to someone's medical history, that extremely useful information doesn't fit well with rule-based diagnosis. Similarly, a comprehensive and deterministic system would need to include questions for an insane number of edge cases, making it difficult to use. Do you really add an option for actually blue skin because of these people: https://en.wikipedia.org/wiki/Blue_Fugates
I could see an AI that recommends medical tests or diagnoses being useful; rule-based systems just seem overly limited.
From my programmer perspective, the point of automating deterministic rule-based systems is so that the humans involved can/will move on to applying non-deterministic intuition-based reasoning - what I would call "intelligence".
It feels like most doctors I've met informally run the deterministic rules in their head (as they've been doing for N decades), and then mostly consider themselves having done their job. This makes for a nice and easy job for the common cases, but ends up dropping the uncommon cases on the floor.
As I said above, there is widespread use of clinical decision support systems already. They do things like flag patients that the program determines might have sepsis in the hospital.
The issue is, even these relatively simple programs generate large numbers of false positives and negatives.
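To make the false-positive problem concrete, here is a sketch of a SIRS-style screen of the kind those sepsis flags are built on (the thresholds are the standard SIRS criteria; the vital-sign distributions are invented). Run over simulated patients who have no sepsis at all, it still fires on a noticeable fraction of them, because healthy vitals routinely cross individual thresholds:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000  # simulated healthy (non-septic) patients

# Invented distributions for healthy vital signs.
temp = rng.normal(36.8, 0.4, n)   # degrees C
hr = rng.normal(75, 12, n)        # beats/min
rr = rng.normal(16, 3, n)         # breaths/min
wbc = rng.normal(7.5, 2.0, n)     # 10^9 cells/L

# SIRS criteria: flag the patient when two or more are met.
criteria = (
    ((temp > 38) | (temp < 36)).astype(int)
    + (hr > 90).astype(int)
    + (rr > 20).astype(int)
    + ((wbc > 12) | (wbc < 4)).astype(int)
)
flagged = int((criteria >= 2).sum())
print(f"{flagged}/{n} healthy patients flagged")  # every flag a false positive
```

Tighten the thresholds and you trade those false positives for missed septic patients, which is exactly the bind these systems are in.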
Now, let's say someone has an uncommon cause of epigastric pain. Epigastric pain is a very non-specific symptom to begin with. So, the amount of rule-based work-up you can do on that information alone, even for common causes, is very small. Never mind the uncommon causes.
Accurately diagnosing and following up on each patient that walks through your door over the course of an entire 80 year medical history is infinitely more complex than getting a plane from point A to point B.
And, the problem of people giving up after having worked through algorithms isn't necessarily solved by more and more formal algorithms. Airline safety is actually a perfect example of why that is the case: there are plenty of cases of pilots getting fixated on what the autopilot is telling them.
Retrospective real world data and mining from EHR is useful, but it’s not as robust as a prospective randomized clinical trial.
There is no protocol, so not all data is captured. You're also relying on the robustness of the data capture, which is nowhere close to as good as a clinical trial because it's just clinical practice.
As such, you need to filter and refine data sets, which opens up the green jelly bean problem. Filters become subjective based on “best thinking”.
Which is fine, but guess who looks at the holes in the data and decides they won’t pay for it? Public and private healthcare payers.
I’ve been involved in those negotiations. Non-RCT data is “suspect” and it’s not unusual to hear “go back and conduct another trial so you have reliable data”.
They are great complements, but not a substitute for RCTs.
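For the unfamiliar, the "green jelly bean" problem is xkcd's multiple-comparisons joke: test enough subgroup filters and something will look significant by pure chance. It's easy to simulate (subgroup count, sample sizes, and alpha are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_analysis(n_subgroups=20, n_per_arm=50):
    """Test 20 subgroup filters on pure-noise data; report whether
    any of them comes out 'significant' at alpha = 0.05."""
    for _ in range(n_subgroups):
        a = rng.standard_normal(n_per_arm)  # no real effect anywhere
        b = rng.standard_normal(n_per_arm)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            return True
    return False

hits = sum(one_analysis() for _ in range(500))
print(hits / 500)  # theory: 1 - 0.95**20, about 0.64
```

Roughly two-thirds of pure-noise analyses "find" a significant subgroup, which is why payers are right to call subjectively filtered retrospective data suspect.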
This matches what the author says, no? He argues that clinical trials (i.e. deciding whether to approve a drug) should always be randomized. At the same time, he believes there is lots of potential for real-world data in other steps: pre-trial simulation, post-approval outcome tracking, automated drug-interaction analysis, etc.
The data analysis is incredibly valuable, even if not suited for drug approval decisions.
So why are trials getting more expensive? The crux of the article was that trials aren't being done because costs have risen. But why have costs risen? Have they risen faster than drug prices?
Very quick searching seems to indicate increased complexity of trials is a big factor. And trials are more complex because most of the drugs being tested are incremental improvements on existing drugs.
“Drugs are more expensive to test when they have smaller effects that require observing more patients for longer periods of time.” [1] In other words, you have to observe more people over a longer period of time.
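That relationship can be made quantitative with a standard two-arm sample-size calculation: the number of patients needed grows with the inverse square of the effect size, so halving the effect roughly quadruples the trial. A sketch (alpha, power, and the effect sizes are illustrative choices):

```python
from scipy.stats import norm

def n_per_arm(effect_size, alpha=0.05, power=0.8):
    """Approximate patients per arm for a two-sample comparison of means,
    with effect_size expressed in standard-deviation units."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # power requirement
    return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

for d in (0.5, 0.2, 0.1):
    print(f"effect {d} SD -> ~{n_per_arm(d):.0f} patients per arm")
# A drug with a fifth of the effect needs 25x the patients.
```

So as drug pipelines shift toward incremental improvements over existing therapies, trial sizes, and hence costs, blow up mechanically.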
That also tempers the article's enthusiasm for biological technologies. The problem with health technology is that every single step is uncertain; every single step needs to be tested. It's "artisanal" to an extreme extent. There is no equivalent to Moore's law. Each step costs more, not less. Well, until we reach the point where we can grow whole organs easily. Then things might change.
There are hints of tracking too many data points, and perhaps too much manual tracking. Also the story that, for very profitable drugs, the approach is about speed of trials rather than cost. If that is pervasive then it might become standard. Heck, if that is where the money is, that is where careers will gravitate to.
Would you also say that for-profit food production systems optimize for profit, not food?
I'd personally say that anyone going hungry in the modern age is a travesty. But to remedy that, I'd leverage existing for-profit food production rather than thinking to overhaul the whole system with less profit motive.
The incentives of the healthcare system are deeply screwed up though. Much of this due to existing regulations, with the rest being momentum and emergent behavior due to those broken incentives. But just to be clear - I'm not arguing in favor of deregulation, but rather sweeping reregulation. For example, there should be no such thing as an "HMO".
I replaced the baity title with a sentence from the article that seems to say what it's about. If someone can find a more representative sentence, we can change it again.