Doctors have bias. Doctors sometimes get apathetic. Not because they have bad intentions; in fact, I bet most doctors really care and really try. But because they're overworked, have to care for many patients they don't personally know (some of whom get very sick), and because they're only human.
In fact, I believe the way most doctors' appointments work is that the doctor gets a list of symptoms, then matches it against a checklist built from what they were taught and have experienced to form a diagnosis. Very similar to an AI algorithm. Of course this isn't as easy as it seems; you can't just Google the symptoms yourself. Doctors' knowledge and experience is very deep, and takes many years to develop. But still, it's not something an AI couldn't eventually automate.
Right now I would be skeptical of an AI doctor, just because I don't think current AI has that capability. But seriously, if ML gets better I would trust it more than any real doctor.
Doctors have an extremely hard time dealing with complex disorders or even diagnosing them. People go years without a proper diagnosis.
I would love for doctors to just take all the information during the first visit and then use some kind of search engine/AI that would analyze the current literature and new research and suggest things to rule out in a cost-effective manner, while first ruling out things that would need immediate diagnosis/treatment, such as cancers.
Patients also need to be tracked, if they agree to it, especially if they have a serious disabling condition. Their sleeping habits, diet, and everything they do need to be reviewed in detail, but that just isn't done. Most of these things are self-reported, and that can be problematic.
Slightly related, but there's a website somewhere that you can upload your DNA sequence to (in a .txt file!) and it shows you relevant scientific literature. About half of the articles it surfaced for me said I was probably going to develop male pattern baldness in my thirties or forties.
Basically, pharmaceutical marketing leaves doctors seriously misinformed, and a lot of doctors lack knowledge about FDA updates on certain drugs. They often prescribe medications that are more dangerous than the alternatives.
I am thinking about creating my own little tool to research drugs before taking them.
It also utilizes DrugBank: https://DrugBank.com
There are a few exceptions like BRCA where the risk from certain genotypes is high enough to justify aggressive preventative care.
There have been clinical decision support systems around for years which can recommend a diagnosis given complete and accurate clinical data. The real problem is gathering that data, and putting it in a form that the algorithm can use. An AI algorithm can't really gather a detailed patient history or make subjective observations about the patient's state. Those are steps which can't really be automated without strong AGI. In theory a doctor could do all the necessary data entry but in practice it would usually be a waste of time and no one wants to pay for it.
Chat bots can handle data collection, “subjective” patient state can be collected through image recognition and tone analysis. Not sure you need AGI to accomplish either of these tasks.
I keep hearing this whining about overwork, and yet when the subject of the residency quotas set by the AMA comes up, or of delegating duties to nurses with almost the same amount of schooling, suddenly you can hear crickets in the room.
> In fact, I believe the way most doctors' appointments work is that the doctor gets a list of symptoms, then matches it against a checklist built from what they were taught and have experienced to form a diagnosis. Very similar to an AI algorithm. Of course this isn't as easy as it seems; you can't just Google the symptoms yourself.
Yes you can, and should. Remember, half the doctors out there are below average...
Every human has a different perspective and approaches problems differently. The old adage "two heads are better than one" comes to mind. With AI there is the possibility that just "one head" will make millions of decisions using a single algorithm. Think about that. No team cooperation. No learning from experience. No brainstorming.
Who's second-guessing the AI? Where is the alternate perspective? Where can you go to get a second opinion when the computer has replaced humans? Imagine being misdiagnosed by a computer while the clueless human health care workers don't or won't question it because the computer said so. You'll be on the phone talking to an AI which is looking at records generated by AI, full of data compiled by AI. Why pay costly humans when you can replace them with a computer?
All of this is going to dehumanize a very human industry (along with everything else.)
Eventually they had to go to the nearest big city (city they're in now is 60K people; city they went to has 900K people). The doctors there figured it out pretty quickly. And they also did not have good things to say about the local doctors. (paraphrase) "I'm not surprised they didn't find anything. They look for the simple solution they know and if that doesn't work they give up."
Additionally, I went to a doctor for a problem in the same big city and he read (and printed out) a webmd article for me.
Anyway. So, today people are stuck with human doctors who have no idea how to fix anything but the simple obvious problems. Personally, I wouldn't mind replacing those people with an AI as I'm not sure it's going to make any difference. There's always going to be specialists and if the AI doctor fails you then you can go to the specialist. But just like today, you're going to have to be your own medical researcher / advocate if the service you get doesn't actually fix your problem.
I personally have 2 rare diseases affecting my peripheral nervous system, and one of them is very rare. The very rare disease is believed to have also caused my type 1 diabetes.
The wild part? The very rare disease, which I had prior to my type 1 diabetes diagnosis, which was also undiagnosed at the time, was blamed as “type 1 diabetes related complications” specifically autonomic neuropathy by human doctors.
Do you think AI would get this right? The answer is an obvious no.
Also, I am in my early 30s. My endocrinologist was surprised that I was 30 and was making a big deal about it (in a “you survived and you shouldn’t be alive” sense). It was never statistically likely that I would be alive; an AI trained on those statistics would have written me off, which is why AI should never be used.
So do you think AI should be making health decisions in a healthcare system that already has a ton of human error, and is trained on those errors?
I mean, the third leading cause of death in the US is believed to be preventable medical errors: https://www.bmj.com/content/353/bmj.i2139
You're saying that humans are unacceptably bad. Then why is it a problem to replace them with equally unacceptably bad AI? Either way, you're not getting the treatment you need.
At least if the AI fails you, then nobody is going to get their ego stepped on if you go to get a second opinion. Whereas with a doctor you have to go through a few uncomfortable conversations and then hope that the next guy isn't going to ignore you because they don't want to sour their relationship with your first doctor.
Today? Sure. But really one of the most plausible ways in which AI could be more effective than human clinicians is in rare diseases. It assumes a massive input of data of course, but clinicians are biased against this in ways that are hard to do anything effective against. This is why so many people with rare conditions spend years before getting a proper diagnosis.
Who cares if it turns out that we can't automate the training of the AI? We're already doing the same work to continually educate and improve doctors. At least the AI won't die of old age once it starts to get really good at what it's doing.
And maybe it turns out that the best that AI can do is duplicate a middle level doctor who isn't really interested in self improvement. If that's the case then at least everyone can get cheap and fast health care that's the same frustrating quality that we're stuck with now. Only with the AI you can say that it's dumb and everyone will believe you. With a doctor you have to fight past "impugning the reputation of the profession" in order to get a real diagnosis.
Diagnosis isn't particularly difficult or time consuming in most routine cases after gathering the necessary data. If you see a jagged line on the x-ray then the patient has a fractured bone. If the A1c level is 7.5% then the patient has diabetes. Etc. Improving diagnostic accuracy will help in rare cases and is worth doing, but that won't have much impact on the health care system as a whole.
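For routine cases like these, the logic really is just a threshold rule. A toy sketch in Python (the 6.5%/5.7% A1c cutoffs are the standard ADA diagnostic thresholds; the function itself is purely illustrative, not clinical software):

```python
def screen_a1c(a1c_percent: float) -> str:
    """Toy threshold rule for an A1c lab result.

    Uses the standard ADA cutoffs: >= 6.5% indicates diabetes,
    5.7-6.4% prediabetes, below that normal. Illustrative only.
    """
    if a1c_percent >= 6.5:
        return "diabetes"
    if a1c_percent >= 5.7:
        return "prediabetes"
    return "normal"
```

So the 7.5% result above trips the diabetes rule with plenty of margin; the hard part was getting an accurate A1c into the system in the first place, not the comparison.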
You at first appear to say that gathering data and automating things is hard and might not be possible.
Then here you're saying that checklists, evidence-based guidelines, and improved interoperability between systems are all the source of doctor improvements.
These are all arguably forms of AI that have been created through data gathering. Yes, a checklist is a very boring AI, but it's taking some knowledge out of the heads of a group of people and then making it available for someone else to use. The missing step seems to be that instead of having a person look at a piece of paper and execute the checklist and evidenced based treatment, having a computer do it.
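To make that concrete, here is a minimal sketch of a checklist encoded as data so a computer, rather than a person with a piece of paper, executes it against the record. The checklist items and field names are invented for illustration:

```python
# Hypothetical pre-procedure checklist, encoded as data instead of
# living on paper, so a program can evaluate it against the record.
CHECKLIST = [
    ("identity_confirmed", "Confirm patient identity and procedure"),
    ("allergies_reviewed", "Review documented allergies"),
    ("antibiotics_given", "Prophylactic antibiotics administered"),
]

def unmet_items(record: dict) -> list:
    """Return descriptions of checklist items not satisfied by the record."""
    return [desc for key, desc in CHECKLIST if not record.get(key, False)]
```

Calling `unmet_items({"identity_confirmed": True})` would flag the two remaining items, which is exactly what the person reading the paper checklist was doing by hand.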
Now we're talking about cost. Cost where (at least in the USA) going to the ER for a broken bone can cost thousands of dollars. Cost where (at least in the USA) a birth (something people from all walks of life go through) has a minimum of the patient spending $10K and regularly gets into the $50K and up range (not even talking about ultrasounds etc leading into the birth).
I cannot believe that we're really talking about data collection being too expensive for the medical industry. Maybe nobody wants to be the person to pick up the bill. But it's going to be a rounding error of a rounding error compared to all the other money that goes through that industry.
As for diagnosing rare or very rare diseases, yes, it is possible, as you said. However, if you search PubMed for the rare disease I have, there are only 111 journal articles that have been written about it, and some are not in English. The information on it is very sparse. I also doubt that it would be diagnosed by AI, as it effectively looks like "diabetes complications" (I have type 1 diabetes), and even though I had autonomic neuropathy at age 5, the onset of symptoms was very insidious.
This is how it has been done for decades at this point (at least in some systems), and yes, it is expensive. Really tackling things like the rare-disease interactions above would take at least an order-of-magnitude change in data availability; that is at least plausible, but it would require some systemic changes.
With that said, I think you've highlighted one possible outcome, but it makes a few assumptions that might play out differently depending on how the tech is implemented.
Imagine, for sake of argument, that the AI is just "part of the team". It doesn't replace a doctor, it doesn't have authority over doctors, it's just another signal. If applied in this way, then you can still:
- Ask for a second opinion
- Go to a doctor/team that doesn't rely on AI
- Consult a different / (better?) AI
I realize you're focusing on an outcome where AI replaces humans in the field, but I'd hope humanity can find an implementation that can both leverage the benefits of the tech while not throwing out the core disciplines of medicine in the process. It's reasonable to be pessimistic about this given the tech-run-amok issues prevalent in 2021, but I still have some hope.
Also imagine the possibilities of AIs sharing knowledge with each other. This has its own semi-terrifying implications, but I don't necessarily think we should conclude that the only outcome is a single AI to rule all AIs (and doctors).
Of course, I could be wrong as well. But I don't think all roads lead to doom.
People always get scared that automation will completely remove the human touch from a process. They forget that people will still design, understand, troubleshoot, maintain, improve, be inspired by, learn from, and innovatively transform their new automated devices and processes.
The future has always ended up being people and machines working together, and I think it will continue to be that way for a long time. There will not be an AGI that replaces human thinking in our lifetime.
If you follow a happy path in such a system, everything is fine. But if you don't, it gets Kafkaesque. Check out a first-hand account for flavor.
If we want better social and political decisions, we have to engage in society and politics. The AI will follow what we decide, as it always does.
Let the AI prescribe all of the horse treatments. The doctor can then spend their time imagining more fanciful diagnoses that we're centuries away from teaching the AI how to deal with. Then when the primary treatment fails, instead of trying to convince an overworked and checked-out doctor that you've got some other problem, they'll already have a list of possible alternative issues that they were really hoping they would get to investigate this time around.
The model leaves a sort of presence on their consciousness, an artifact, and allows them to reason in ways aided by those artifacts. When you stop creating and using those artifacts and depend on technology to act in their place, you lose some creative aspects you would normally be able to connect to or utilize in other mental models.
For some things I think it's good for technology to act as black boxes we don't have to understand, just know that we can understand it if we need to. Other things are good to understand even if you can automate it. How you choose and differentiate which of those is an incredible challenge in my mind. What do I commit to memory and integrate into my mental model of the world and what do I just reference and use when I need it.
Here are some examples of feedback:
- The AI misses a cancer that gets worse and is later detected on a blood test.
- The AI detects a cancer on an x-ray, but a (more accurate) CT or biopsy indicates that there is no cancer.
These kinds of feedback loops should allow the clinical performance of AI to be measured and hopefully improved over time.
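Given confirmed outcomes like these, measuring clinical performance is straightforward bookkeeping. A minimal sketch (the metric definitions are standard; the counts in the test case are invented):

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity and specificity from confirmed-outcome feedback.

    Missed cancers (the first kind of feedback above) add to fn and
    lower sensitivity; false alarms (the second kind) add to fp and
    lower specificity.
    """
    return tp / (tp + fn), tn / (tn + fp)
```

Tracking these two numbers per model version is the minimum needed to tell whether "improved over time" is actually happening.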
Has already been happening.
I wrote the essay linked below a few months ago. It is very relevant here. I argue that asking ML for explanations forces you to get a dumbed-down version of the result, just like asking any expert to explain all the subtlety of what they are doing. Asking for explanations is a kind of micromanagement. There are instances where explanations are important (like research), but much less so in model deployment.
The better way is to focus on the results the models provide, and confidence that the model is making supported decisions (i.e. is not extrapolating or predicting on out of distribution data). This is how we would use other kinds of experts - validate their expertise and trust them when they are working in their area.
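One simple way to get that kind of confidence signal is an out-of-distribution guard that refuses to predict when an input falls outside the feature ranges seen in training. A toy sketch, assuming per-feature min/max arrays were saved at training time (the names and the 10% tolerance are invented):

```python
import numpy as np

def in_distribution(x, train_min, train_max, tolerance=0.1):
    """Return True if every feature of x lies within the training
    range, widened by `tolerance` times each feature's span.

    train_min/train_max are per-feature arrays recorded at training
    time; x is one input vector.
    """
    span = train_max - train_min
    lo = train_min - tolerance * span
    hi = train_max + tolerance * span
    return bool(np.all((x >= lo) & (x <= hi)))
```

A model wrapped this way would return its prediction only when the guard holds, and otherwise defer to a human, which is much closer to how we treat human experts working outside their specialty.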
Why should that be different for ML models? I would expect to be able to switch between a results-based mode (intuition) and a more thorough explaining mode (XAI) to assess the soundness of the reasoning. And then I am also fully aware that complexity increases when I turn on explanations.
There are different modes of reasoning. When I am asked to assess the quality of other people's reasoning, I do not necessarily need to be an expert or have them think like I do.
Think semantics. You have declarative and operational semantics. It is perfectly alright to have multiple implementations of the same type declaration, or multiple proofs of the same proposition. It is not really the proof that is of interest, but what mode of reasoning was used to arrive there.
I declare the role I need in a hiring position, and the person being interviewed tells me how she inhabits that role. I assess whether that actually constitutes an inhabitation.
Though it appears that we fundamentally agree, as I read from your last sentence that the "removal" of the explainable parts is only a last step before deployment.
> But for "managing" the model ...
The problem then becomes taking a good history, using imaging and tests effectively, and charting all relevant observations. AI and IT can help here too. With the same CYA motivation…
The real problem today seems to be that docs need amanuenses. Entering medical data, even in electronic records systems, seems to be a real challenge for docs. No data… not much an AI can do.
I wouldn't describe this as a drawback of requiring explainability. I'd describe this as drawbacks of today's models and today's explainability methods. Today's explainability tools are pretty nascent and our usual deep learning models know nothing about causation, which handicaps any "why" question. But I'm still pretty strongly in favor of requiring an interrogable answer to "why did you diagnose that," even if that isn't something we really know how to do right yet.
The challenge seems to be that we know we want "why" answers, but we don't know how to describe/codify what constitutes an acceptable "why" answer.
I agree there is more work to be done and making good mathematical decisions when designing the layers seems like a promising way to go.