
Challenges for Artificial Intelligence in Medicine - brandonb
https://blog.cardiogr.am/three-challenges-for-artificial-intelligence-in-medicine-dfb9993ae750#.wxs07vqs1
======
brandonb
(OP here)

We spend a lot of time thinking about how to make AI succeed in medicine.
Given that so many efforts, including MYCIN, have been tried and have failed
before, one of the key questions to answer is "Why now?" In other words, what
has changed in the world that will let AI succeed where it has failed before?

I'm curious: is anybody else here applying deep learning, or any other
subfield of AI, to healthcare?

If so... do the challenges listed in this post resonate? Do you believe the
shifts identified are the right ones to focus on?

~~~
kevinalexbrown
I've used deep learning for segmenting brain anatomical scans, and I worked in
a lab that used neural networks to detect cancerous tumors.

I suspect the first major hospital-facing implementations of machine learning
will be in radiology, e.g.:
[http://suzukilab.uchicago.edu/](http://suzukilab.uchicago.edu/), which has
been diagnosing cancerous tumors in CT scans with neural networks since before
it was cool (one reason you won't see the term 'deep learning' in that
literature is that these were originally just 3-layer networks, built before
the term was even coined). IIRC it outperformed the average radiologist.

I wonder if the label problem could be less difficult for some low-hanging
fruit. The CT scan neural network required something like 40k labeled scans
from a radiologist, but labels could come for free: many yes/no disease
detections will eventually be resolved by human labeling anyway, by your
doctor. If you had access, say, to every CT scan taken, plus the electronic
health records for those patients, your labels would be noisy and biased, but
at massive scale. The problem is (legitimately) restricted access to health
records in the US. Maybe some European countries have better data access?
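The "labels for free" idea above is essentially weak supervision: join each imaging study to later diagnosis codes for the same patient. A minimal sketch, assuming hypothetical record layouts (every field name and the lookback window below are invented for illustration; only the ICD-10 C71 prefix for malignant brain neoplasm is real):

```python
# Sketch: derive noisy "free" labels for scans by joining them to later
# EHR diagnosis codes. Labels are noisy (coding errors, missed follow-up)
# and biased (only symptomatic patients get scanned), but they scale.
from datetime import date

def weak_labels(scans, diagnoses, code_prefix="C71", window_days=180):
    """Label a scan 1 if the same patient received a matching diagnosis
    code within `window_days` after the scan date, else 0."""
    labels = {}
    for scan in scans:
        hits = [
            d for d in diagnoses
            if d["patient_id"] == scan["patient_id"]
            and d["code"].startswith(code_prefix)   # ICD-10 C71.* = malignant brain neoplasm
            and 0 <= (d["date"] - scan["date"]).days <= window_days
        ]
        labels[scan["scan_id"]] = 1 if hits else 0
    return labels

scans = [
    {"scan_id": "s1", "patient_id": "p1", "date": date(2016, 1, 10)},
    {"scan_id": "s2", "patient_id": "p2", "date": date(2016, 2, 1)},
]
diagnoses = [
    {"patient_id": "p1", "code": "C71.9", "date": date(2016, 2, 20)},
    {"patient_id": "p2", "code": "J45.0", "date": date(2016, 2, 15)},  # asthma: no hit
]
print(weak_labels(scans, diagnoses))  # {'s1': 1, 's2': 0}
```

The lookback window is the main design knob: too short and you miss slow diagnoses, too long and you label incidental findings.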

And the implementation problem will eventually disappear. I remember talking
with a radiologist years ago, who remarked "some people in my field have no
idea it's about to disappear". I'm not so sure there will be no more
radiologists, but their role will definitely change. Hospitals would be okay
with this, actually, since radiologists are expensive. Eventually radiology
scans will probably be like ordering blood tests, where fewer and fewer MDs
are required.

~~~
naveen99
CT scans aren't really used to look for brain tumors; we mostly use MRI for
that. CT is used for screening of stroke, trauma, and other things. Source:
I'm a radiologist.

I work on radiology image segmentation as well, and I agree it is solvable
with machine learning. But even if software could do the job of a radiologist,
it wouldn't replace one any more than your EKG-reading program replaced
cardiologists.

~~~
Toenex
> But even if software could do the job of a radiologist, it wouldn't replace
> one any more than your EKG-reading program replaced cardiologists.

I don't think the radiologist is going anywhere soon, but the role is
changing. Radiologists increasingly have to deal with derived information:
they need to understand the algorithms being used as well as the biology,
anatomy, physiology, and disease being investigated. I can see a time when
algorithmic specialists become a regular part of their multidisciplinary
team.

------
aabajian
I work for a company that does machine learning on clinical notes. The
challenges the author introduces are real, but he misses the mark on the last
point "Only Partially a Problem: Regulation and Fear."

Actually, regulation and fear _are_ the main reasons that machine learning
hasn't taken off in clinical medicine. More precisely, the provider's fear of
getting sued and the regulations that require a licensed practitioner to "have
the final say." There is one more problem as well: machine learning doesn't
solve a problem that providers _think_ they have. It's lesson #1 from The Lean
Startup or The Startup Owner's Manual. You may have the best EKG-reading
software in the world (I have no doubt computers could surpass providers on
this task), but if the providers don't feel they need it, it simply won't be
adopted. This is the Watson situation at heart.

Conversely, here are some areas in medicine where machine learning _has_ been
adopted:

1\. Medical billing code generation: Several companies have systems for
reading notes using natural language processing and predicting billing codes
using market-basket analysis.

2\. Identifying bacterial cultures: Inpatient bacterial cultures are placed in
a big incubator and constantly scanned for growth. When growth is suspected,
there are emerging algorithms to automatically classify the bacteria. Similar
work is being applied to other areas of pathology (see:
[http://www.nature.com/articles/ncomms12474](http://www.nature.com/articles/ncomms12474))

3\. Image-analysis in radiology: There are a few radiology companies that are
demonstrating superior results by applying novel algorithms. While not
"machine learning" per se, the existence of such algorithms is encouraging for
future advancements in radiology, since it's a step beyond just viewing the
image. Here's one such company that has gained FDA approval for their blood
flow mapping technology:
[http://www.ischemaview.com/](http://www.ischemaview.com/)
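The billing-code generation in point 1 above could be sketched, very roughly, as mining association-rule confidences from historically human-coded notes and then suggesting codes for new notes. Everything below is invented for illustration (the example CPT codes 93000 and 99213 are real codes, but the term lists and threshold are made up):

```python
# Sketch: suggest billing codes from note terms via simple association
# rules, i.e. the confidence P(code | term) estimated from past notes.
from collections import defaultdict

def mine_rules(history):
    """history: list of (terms, codes) pairs from past, human-coded notes.
    Returns confidence P(code | term) for each observed (term, code) pair."""
    term_count = defaultdict(int)
    pair_count = defaultdict(int)
    for terms, codes in history:
        for t in terms:
            term_count[t] += 1
            for c in codes:
                pair_count[(t, c)] += 1
    return {pair: n / term_count[pair[0]] for pair, n in pair_count.items()}

def suggest_codes(note_terms, rules, min_conf=0.6):
    """Suggest codes whose best supporting rule clears the confidence bar."""
    scores = defaultdict(float)
    for t in note_terms:
        for (term, code), conf in rules.items():
            if term == t and conf >= min_conf:
                scores[code] = max(scores[code], conf)
    return sorted(scores, key=scores.get, reverse=True)

history = [
    ({"chest pain", "ekg"}, {"93000"}),            # 93000: EKG w/ interpretation
    ({"chest pain", "ekg"}, {"93000", "99213"}),   # 99213: office visit
    ({"cough"}, {"99213"}),
]
rules = mine_rules(history)
print(suggest_codes({"ekg"}, rules))  # ['93000']
```

A real system would extract the terms with NLP rather than take them as given, and would keep a human coder in the loop; the point is just that the codes can be predicted from co-occurrence statistics.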

------
northern_lights
Many people might not understand just how busy physicians are, and how
difficult it can be to integrate a new product into the clinical workflow.

The most pressing thing to understand is that clinicians spend the _VAST_
majority of their time gathering all of the necessary information to make a
diagnosis. In other words, they aren't puzzling over how to diagnose about 85%
(made that up) of their patients.

Once the necessary information is gathered, an experienced doc doesn't usually
spend more than about 10-15 _seconds_ debating different diagnoses. Therefore,
if your tool takes more than 10-15 seconds to launch, enter any necessary
data, and get a result, you are slowing the clinician down and they won't use
it. This is why automated EKG interpretations (which are very much a real
thing used at hospitals across the country) print directly on the EKG printout
- it doesn't cost the clinician more than about 2 seconds to read what the
machine thinks and adjust their interpretation accordingly[1].

One of the major problems limiting adoption of "expert" computer systems is
the amount of (very expensive) integration it takes to get them under that
10-15 second limit. One of the big reasons radiology is seeing a lot of buzz
around machine learning and automated interpretation is that integration
becomes a lot easier when you can just feed in an image and maybe 5 words
about the indication for the study.

I would love to go on for a while about this stuff, but I'll stop there for
now :)

[1] Some people here might be interested to learn that non-cardiologists
generally don't have negative views about automated EKG interpretations. But
we are also very well-aware that when we make decisions about a patient, those
decisions have to be anchored to something a lot more substantial than "the
machine told me to do it."

~~~
brandonb
One way to think about AI's potential impact is less about replacing what
physicians do well currently, and more about doing things they can't do at
all.

Take ECGs -- it's true that in a hospital, an automated ECG interpretation
doesn't buy you much. But what about the patient with a paroxysmal heart
rhythm that doesn't show up while they're at the doctor's office?

I was at a patient conference recently, and people were describing the first
time they felt atrial fibrillation (a common abnormal heart rhythm). Many
times, by the time they got to the doctor, they were back in sinus rhythm and
thus the ECG showed no abnormality. Some were told they were just feeling
"anxious" or "going through menopause." It often took months of persistence
just to get a diagnosis.

Now, if you have cheap sensors plus AI analyzing the patient's whole heart
history before they walk in the door, you can do a lot of good for real
people.

~~~
northern_lights
To address your example directly: we already have Holter monitors that would
show a case of atrial fibrillation quite easily. They aren't terribly
expensive, at least for something that has to have FDA approval, and they are
frequently used. Heck, you don't even need "AI" in the sense of neural
networks/machine learning/some other buzzword. Current systems will review a
strip collected over several days and flag any abnormal rhythms.
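As a toy illustration of the kind of non-"AI" flagging described above (not how any actual monitor works): atrial fibrillation produces "irregularly irregular" R-R intervals, so unusually high R-R variability in a window is a crude flag. The window size and threshold here are invented, not clinically validated:

```python
# Sketch: flag sliding windows of R-R intervals (ms) whose coefficient of
# variation (std/mean) is high, as a crude proxy for an irregular rhythm.
def flag_afib(rr_intervals_ms, window=10, cv_threshold=0.12):
    """Return start indices of windows whose R-R coefficient of
    variation exceeds the threshold."""
    flagged = []
    for i in range(len(rr_intervals_ms) - window + 1):
        w = rr_intervals_ms[i:i + window]
        mean = sum(w) / window
        var = sum((x - mean) ** 2 for x in w) / window
        if var ** 0.5 / mean > cv_threshold:
            flagged.append(i)
    return flagged

regular = [800] * 10                      # steady sinus rhythm, ~75 bpm
irregular = [620, 940, 710, 1050, 580,    # chaotic, afib-like intervals
             890, 760, 1010, 640, 900]
print(flag_afib(regular))    # []
print(flag_afib(irregular))  # [0]
```

Real review software layers far more on top (P-wave analysis, artifact rejection), but a simple statistic like this already separates the two strips.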

The problem comes with determining who to put on a monitor. In the case of the
patients you described, it's actually quite likely that the doctors seeing
these patients considered the possibility of afib. The symptoms, though, can
be very vague, and they are seen nearly every day in the doctor's office. It's
simply too expensive to put every patient on a Holter monitor - the doc's
office has to be paid to maintain the monitors (which people abuse at home),
the nurses have to be paid to teach patients how to correctly wear them, the
monitor company has to be paid for whatever absurdly expensive and proprietary
review software they supply, and the prescribing doctor (oftentimes the
prescribing cardiologist) has to be paid to review and confirm the machine's
interpretation.

All of this for a transient rhythm which any second year medical student would
easily recognize if presented the EKG from across the room.

The sad reality is that the patients you described were experiencing the
system as it is "designed" (I use the term loosely) to work. The fact that
someone is persistently seeking help for their problem dramatically raises the
probability that something is truly wrong, and doctors actually recognize this
and take it into account. This is one of the reasons it's considered best
practice to establish a long term relationship with one doctor who knows you
well, but it's harder and harder to do with insurance companies only
reimbursing for 15 minute visits.

------
dharma1
I'm working with a few people on ML applications for medical image
segmentation, in Finland and Southeast Asia. I think ML-aided diagnosis will
be commonplace pretty soon.

Here in the UK, DeepMind has been doing interesting work on retinal and
radiology images with NHS.

While I agree that large enough quantities of labeled data, and legal access
to it, can be hard to get, there is, interestingly, much lower-hanging fruit
in the medtech space that doesn't necessarily have anything to do with machine
learning.

Take hospital IT software, for instance. Doctors literally waste a
double-digit percentage of their time wrestling with really bad legacy
software.

Even the really expensive solutions, like Epic Systems, are horrible. I am
hopeful that better options will become available and that future public
health budgets won't get wasted on the kind of systems that exist now.

~~~
amelius
The interesting part is that a lot of effort is already being made to improve
those systems. I even know a family doctor who was working in his spare time
on improving IT infrastructure.

~~~
Axsuul
Could I connect with that family doctor you know? I love speaking with
technology-minded individuals in healthcare. My contact info is hello@james.hu

Thanks!

------
stewbrew
AI again? Expert systems have been around to support medical doctors'
decision making for 2+ decades. Studies have demonstrated that doctors can use
them to improve their decisions. Hardly anybody uses them in practice.

In real life, medical information often is stored as PDF or similar in the
hospital information system. An interesting challenge for AI would be to
encode these PDFs.

~~~
ch4s3
Yeah, I build a decision support product that uses an expert system/GOFAI. We
parse PDFs, root around the EHR, read and analyze unstructured data, and so
on. Parsing PDFs isn't that hard, unless you want to get things like EKG
results; then you need to do OCR and some analysis on the now potentially
garbled text.

We have some pretty active users with great results, but doctors are super
busy. It's hard to get them to use anything that isn't in their standard tool
kit or tied to payments. And that's understandable when you see 14+ patients a
day. Getting into the workflow is the real challenge for AI, in my view.

------
vonnik
I'm a bit disappointed in the straw-man assumptions in the first paragraph
about AI and cats. There's an _enormous_ amount of work being done applying AI
and deep learning to healthcare. Enlitic is one example. The MLHC conference
is entirely devoted to the topic. DeepMind's work with the NHS is also well
known.

------
Cromatico
The real disruption is in giving power to the patient, not the doctor. I want
that power. I check online resources all the time about every sign and symptom
I get, about every drug and medicine, and about all procedures, in order to
avoid visits at all costs: only for surgery, only as a last resort.

Yes, self-medication is wrong - right now, it is wrong - and that is exactly
where the disruption lies. Give information to patients as a first line of
defense, then let doctors handle the special cases.

~~~
sjg007
This is a hard one... you want a cautious doctor, but at the same time you
need someone who will order the test when necessary and is not overworked. The
balance is in self-advocating without crying wolf. That is the problem AI
needs to solve.

