
A.I. Shows Promise as a Physician Assistant - rafaelc
https://www.nytimes.com/2019/02/11/health/artificial-intelligence-medical-diagnosis.html
======
arkades
1) “The equipment is never the same. You have to make sure the data is
anonymized. Even if you get permission, it is a massive amount of work.” This
is literally true, but also, not something I would want to change. Our PHI
should be anonymized, and we should have the right to grant or deny permission
for how our most delicate personal data is used.

2) "When tested on unlabeled data, the software could rival the performance of
experienced physicians. It was more than 90 percent accurate at diagnosing
asthma; the accuracy of physicians in the study ranged from 80 to 94 percent."

And this is where "physician support tools" have always fallen down, and it
was the give-away I was expecting in the article. There are two ways of
assisting diagnosis: (a) given this mass of data, what does the patient have?
and (b) given this mass of data, does this patient likely have asthma?

(B) is relatively easy. This is not something physicians need help with. It's
also what people keep building physician support tools to do. Because it's
easy. Note the article gives away that they seem to be working on (b),
although the hype machine - as always - implicitly suggests they're working on
(a).

I want (a). I was involved with a couple of patients recently that had their
amyloid cardiomyopathy missed. I decided to refresh myself on the topic. The
first sentence in the relevant text was "The key to diagnosing amyloid
cardiomyopathy is remembering that it exists." (a) could have helped those
patients. (b) could not - if their original physicians - any of them - had
"remember[ed] it exists," they would not have needed AI to help diagnose it.
It
would have been trivial. Those docs didn't even make a mistake, per se - it's
such a stupidly rare condition and, in those patients, manifested so subtly,
that connecting the dots was reasonably unreasonable. That's where AI would be
helpful. Not "does this kid have (obvious and common and extremely easily
identified condition)?"

~~~
yorwba
If you can answer "given this mass of data, does this patient likely have X?"
for any X, then (a) is as easy as iterating through all possible values of X
and returning those where the answer is "yes". Remembering that some rare
disease exists may be hard for humans, but is the easiest part for a computer.
So even though some specific trial might be on (b) where X = "asthma",
everyone working on (b) is also implicitly working on (a).
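The brute-force reduction described above can be sketched in a few lines. This is a toy illustration, not anyone's actual system: `ConstantClassifier` and `differential_diagnosis` are hypothetical names, and the `predict_proba(record)` interface is an assumption about what a per-condition (b)-style model would expose.

```python
class ConstantClassifier:
    """Toy stand-in for a per-condition (b)-style model.
    Assumption: a real model would expose some predict_proba-like call;
    this one just returns a fixed probability."""
    def __init__(self, p):
        self.p = p

    def predict_proba(self, record):
        return self.p


def differential_diagnosis(record, classifiers, threshold=0.5):
    """Approach (a) by brute force: ask every (b)-style classifier in turn."""
    hits = [(condition, model.predict_proba(record))
            for condition, model in classifiers.items()]
    # Keep the "likely yes" answers, ranked by probability, so a rare
    # condition a human forgot to consider still makes the list.
    return sorted((h for h in hits if h[1] >= threshold),
                  key=lambda h: h[1], reverse=True)


# Usage with toy classifiers:
models = {
    "asthma": ConstantClassifier(0.92),
    "amyloid cardiomyopathy": ConstantClassifier(0.61),
    "influenza": ConstantClassifier(0.05),
}
print(differential_diagnosis({"age": 54}, models))
# [('asthma', 0.92), ('amyloid cardiomyopathy', 0.61)]
```

The loop itself is trivial; the hard part (which aqme28 raises below) is that each per-condition classifier still has to be trained, and the rare conditions have the least training data.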

~~~
aqme28
> Remembering that some rare disease exists may be hard for humans, but is the
> easiest part for a computer.

I don't think that's necessarily true. The rarer a disease, the rarer the data
and therefore the harder it is to train a computer to recognize it.

------
randcraw
> It could be years before deep-learning systems are deployed in emergency
> rooms and clinics.

Yep. Caduceus was an expert system that did this back in 1980.
https://en.wikipedia.org/wiki/CADUCEUS_(expert_system)

Since then many Bayesian and decision-tree diagnosis systems were created and
performed comparably well. But none were ever deployed, largely because
some sort of nurse or PA is still needed to do the hard part -- the physical
exam. Diagnosis using existing data is easy; it's the acquisition of all the
necessary data that's hard.

Until software/AI can replace _all_ that a health pro does, especially the
physical exam, it will remain a niche tool used only in the path lab or the
back offices of insurance companies.

~~~
nilkn
The common argument I've heard is that these systems won't replace nurses/PAs,
but rather family physicians, whose diagnosis and referral skills may become
redundant in the face of a nurse/PA to collect data and a machine learning
system to understand the data and make recommendations in response to it
(e.g., request more data/tests, make a diagnosis, make a referral, etc.).

------
ourmandave
_Many organizations, including Google, are developing and testing systems that
analyze electronic health records in an effort to flag medical conditions such
as osteoporosis, diabetes, hypertension and heart failure._

I used to have to wait a week for lab results to be mailed.

Now I'll have to log in and sync my browser to read them. But only after I
confirm my cell phone number and enter the code in a confirmation email.

Oh, and here's some recommended links based on your family, er browsing
history.

------
Bucephalus355
I feel embarrassed saying this but if we consider Alexa a certain form of “AI”
then the talk I’ve heard about “VoiceOps” would be super duper helpful.

I can’t say this without sounding extremely lazy, but being able to ask my
computer “what is the IP address of that server I just spun up” and other
simple-to-intermediate queries would be a game changer for SREs and sysadmins.

~~~
avinium
I’ve actually been talking to a lot of people recently about these kinds of
concepts - figuring out what kinds of problems people are having that could be
solved with current levels of AI speech/language understanding.

Would I be able to drop you an email or PM with a few questions?

