2) "When tested on unlabeled data, the software could rival the performance of experienced physicians. It was more than 90 percent accurate at diagnosing asthma; the accuracy of physicians in the study ranged from 80 to 94 percent."
And this is where "physician support tools" have always fallen down, and it was the give-away I was expecting in the article. There are two ways of assisting diagnosis: (a) given this mass of data, what does the patient have? and (b) given this mass of data, does this patient likely have asthma?
(B) is relatively easy. This is not something physicians need help with. It's also what people keep building physician support tools to do. Because it's easy. Note the article gives away that they seem to be working on (b), although the hype machine - as always - implicitly suggests they're working on (a).
I want (a). I was involved with a couple of patients recently who had their amyloid cardiomyopathy missed. I decided to refresh myself on the topic. The first sentence in the relevant text was "The key to diagnosing amyloid cardiomyopathy is remembering that it exists." (a) could have helped those patients. (b) could not - if any of their original physicians had "remember[ed] it exists", they would not have needed AI to help diagnose it. It would have been trivial. Those docs didn't even make a mistake, per se - it's such a stupidly rare condition and, in those patients, manifested so subtly, that expecting anyone to connect the dots was unreasonable. That's where AI would be helpful. Not "does this kid have (obvious and common and extremely easily identified condition)?"
It seems to me that all these "expert" A.I. systems are predicated on having a wealth of data already available. To me, the more interesting question is (c) "given this scarcity of data, which steps are likely to lead to the shortest time to diagnosis for this patient?"
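Question (c) is basically sequential value-of-information: at each step, order whichever test is expected to shrink your diagnostic uncertainty the most, update on the result, repeat. A toy sketch of one greedy step (every disease, test, prior, and likelihood below is made up purely for illustration):

```python
import math

# Hypothetical toy model: prior probabilities over three diagnoses,
# and for each candidate test, P(positive result | diagnosis).
PRIORS = {"asthma": 0.6, "gerd": 0.3, "amyloid_cm": 0.1}
TESTS = {
    "spirometry":  {"asthma": 0.90, "gerd": 0.20, "amyloid_cm": 0.10},
    "ph_probe":    {"asthma": 0.20, "gerd": 0.80, "amyloid_cm": 0.10},
    "cardiac_mri": {"asthma": 0.05, "gerd": 0.05, "amyloid_cm": 0.90},
}

def entropy(dist):
    # Shannon entropy in bits: how uncertain we still are.
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def posterior(priors, likelihoods, positive):
    # Bayes update on a binary test result.
    unnorm = {d: p * (likelihoods[d] if positive else 1 - likelihoods[d])
              for d, p in priors.items()}
    z = sum(unnorm.values())
    return {d: v / z for d, v in unnorm.items()}

def best_test(priors, tests):
    # Pick the test with the largest expected reduction in entropy,
    # averaging over both possible outcomes.
    h0 = entropy(priors)
    def expected_gain(lik):
        p_pos = sum(priors[d] * lik[d] for d in priors)
        h_pos = entropy(posterior(priors, lik, True))
        h_neg = entropy(posterior(priors, lik, False))
        return h0 - (p_pos * h_pos + (1 - p_pos) * h_neg)
    return max(tests, key=lambda t: expected_gain(tests[t]))

print(best_test(PRIORS, TESTS))
```

With these invented numbers the greedy choice is spirometry, because it discriminates best between the two most probable candidates; a real system would also have to weigh cost, invasiveness, and turnaround time, which is where it gets hard.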
If the doctor didn't update the software, who should be liable if something bad happens?
If the program has a bug, who should be liable?
Does the doctor have the responsibility to double check the recommendations made by the AI/expert system?
It's much easier to just have a single point of responsibility (the doctor), as it makes the legal questions simpler.
The way it currently stands, even with dilution of authority the doctor is still the final point of liability. It’s a topic of heated anger in the physician community. If you don’t let midlevels work with nearly unlimited autonomy (so the hospital can use them to maximize efficiency) you will be replaced ... but you’re liable for them. The phlebotomy system fails to populate a blood gas order? Your fault for not micromanaging every element of the hospital operation, over which you have basically no authority, because you’re just another line worker (but thanks for carrying the liability). Docs carry liability for everything as though we were in charge, when we are actually just highly specialized line workers taking orders from above.
It’d be nice if we were only held responsible for our own errors of medical judgment.
I don't think that's necessarily true.
The rarer a disease, the rarer the data and therefore the harder it is to train a computer to recognize it.
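There's also a subtler trap with rare diseases: headline accuracy numbers like the 90 percent in the article say almost nothing about them. A hand-rolled toy illustration (the 1% prevalence is invented):

```python
# With a 1%-prevalence disease, a "model" that always predicts
# "healthy" scores 99% accuracy while detecting nothing.
labels = ["sick"] * 10 + ["healthy"] * 990   # 1% prevalence
predictions = ["healthy"] * len(labels)      # trivial majority-class classifier

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
true_pos = sum(p == y == "sick" for p, y in zip(predictions, labels))
recall = true_pos / labels.count("sick")     # fraction of sick patients caught

print(f"accuracy={accuracy:.2%}, recall on the rare disease={recall:.0%}")
```

So scarce training data is only half the problem; even evaluating a model on a rare condition takes care, which is why sensitivity/recall matter more than accuracy here.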
(B) focuses on things that are easy. (A) focuses on things that are ambiguous. The appearance of the former is often a subset of the latter.
edit: I wanted to clarify. Asking "is it this common, easily diagnosed (meaning: it has very specific features that can be easily elucidated by testing or physical examination) thing?" does not comprise a set of items that, if iterated over, tells you what a patient has. Difficult diagnoses are made when things are rare, sure, but more commonly because they're ambiguous, polymorphic, and/or dynamic. Iterating through lists of (b) does not give you (a) - often, not even through a Holmesian process of elimination.
Turns out medicine is hard?
A tool that helps doctors remember rare diagnoses is a powerful helper.
Isabel Healthcare has such a tool.
But clinical decision support tools tend to create resistance among doctors. So I'm not sure Isabel's tool is popular.
Yep. Caduceus was an expert system that did this back in 1980. https://en.wikipedia.org/wiki/CADUCEUS_(expert_system)
Since then, many Bayesian and decision-tree diagnosis systems have been created and performed comparably well. But none were ever deployed, largely because some sort of nurse or PA is still needed to do the hard part -- the physical exam. Diagnosis using existing data is easy; it's the acquisition of all the necessary data that's hard.
Until software/AI can replace all that a health pro does, especially the physical exam, it will remain a niche tool used only in the path lab or the back offices of insurance companies.
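For what it's worth, the "diagnosis using existing data is easy" part really can be a few lines of naive Bayes once the findings are in hand. A toy sketch in that spirit (diseases, findings, priors, and likelihoods are all invented; absent findings are simply ignored, which is a big simplification):

```python
import math

PRIOR = {"flu": 0.05, "cold": 0.20, "allergy": 0.10, "healthy": 0.65}
P_FINDING = {  # P(finding present | disease) -- made-up numbers
    "fever":      {"flu": 0.9, "cold": 0.3, "allergy": 0.05, "healthy": 0.01},
    "sneezing":   {"flu": 0.4, "cold": 0.8, "allergy": 0.90, "healthy": 0.05},
    "itchy_eyes": {"flu": 0.1, "cold": 0.2, "allergy": 0.80, "healthy": 0.02},
}

def diagnose(findings):
    # Naive Bayes in log space: start from the prior, multiply in the
    # likelihood of each observed finding, return the best-scoring disease.
    score = {}
    for disease, prior in PRIOR.items():
        s = math.log(prior)
        for f in findings:
            s += math.log(P_FINDING[f][disease])
        score[disease] = s
    return max(score, key=score.get)

print(diagnose({"sneezing", "itchy_eyes"}))
```

Which is exactly the point: the math fits on a napkin, but filling in `findings` -- the history and physical -- is the part that still needs a human in the room.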
I used to have to wait a week for lab results to be mailed.
Now I'll have to log in and sync my browser to read them. But only after I confirm my cell phone number and enter the code in a confirmation email.
Oh, and here's some recommended links based on your family, er browsing history.
I can’t say this without sounding extremely lazy, but being able to ask my computer “what is the IP address of that server I just spun up” and other simple-to-intermediate queries would be a game changer for SREs and Sysadmins.
Would I be able to drop you an email or PM with a few questions?