A.I. Shows Promise as a Physician Assistant (nytimes.com)
72 points by rafaelc 7 days ago | 14 comments

1) “The equipment is never the same. You have to make sure the data is anonymized. Even if you get permission, it is a massive amount of work.” This is literally true, but also not something I would want to change. Our PHI should be anonymized, and we should have the right to grant or deny permission for how our most delicate personal data is used.

2) "When tested on unlabeled data, the software could rival the performance of experienced physicians. It was more than 90 percent accurate at diagnosing asthma; the accuracy of physicians in the study ranged from 80 to 94 percent."

And this is where "physician support tools" have always fallen down, and it was the give-away I was expecting in the article. There are two ways of assisting diagnosis: (a) given this mass of data, what does the patient have? and (b) given this mass of data, does this patient likely have asthma?

(B) is relatively easy. This is not something physicians need help with. It's also what people keep building physician support tools to do. Because it's easy. Note the article gives away that they seem to be working on (b), although the hype machine - as always - implicitly suggests they're working on (a).

I want (a). I was involved recently with a couple of patients who had their amyloid cardiomyopathy missed. I decided to refresh myself on the topic. The first sentence in the relevant text was "The key to diagnosing amyloid cardiomyopathy is remembering that it exists." (a) could have helped those patients. (b) could not - if any of their original physicians had "remember[ed] it exists," they would not have needed AI to help diagnose it. It would have been trivial. Those docs didn't even make a mistake, per se - it's such a stupidly rare condition and, in those patients, it manifested so subtly, that expecting anyone to connect the dots was unreasonable. That's where AI would be helpful. Not "does this kid have (obvious and common and extremely easily identified condition)?"


> There are two ways of assisting diagnosis: (a) given this mass of data, what does the patient have? and (b) given this mass of data, does this patient likely have asthma?

It seems to me that all these "expert" A.I. systems are predicated on having a wealth of data already available. To me, the more interesting question is (c) "given this scarcity of data, which steps are likely to lead to the shortest time to diagnosis for this patient?"
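To make (c) concrete: here's a minimal sketch of one way to attack it - greedily picking the test with the highest expected information gain over the current differential. Everything here (the diseases, the tests, the probabilities) is invented for illustration:

    import math

    def entropy(dist):
        # Shannon entropy of {disease: probability}
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    def posterior(prior, likelihood, result):
        # Bayes update: P(disease | test result)
        unnorm = {d: prior[d] * likelihood[d][result] for d in prior}
        z = sum(unnorm.values())
        return {d: v / z for d, v in unnorm.items()}

    def best_next_test(prior, tests):
        # Pick the test whose expected posterior entropy is lowest,
        # i.e. the one expected to shrink the differential the most.
        def expected_entropy(likelihood):
            h = 0.0
            for result in ("pos", "neg"):
                p_result = sum(prior[d] * likelihood[d][result] for d in prior)
                if p_result > 0:
                    h += p_result * entropy(posterior(prior, likelihood, result))
            return h
        return min(tests, key=lambda name: expected_entropy(tests[name]))

    # Toy numbers only:
    prior = {"asthma": 0.6, "amyloidosis": 0.4}
    tests = {
        "spirometry": {"asthma": {"pos": 0.9, "neg": 0.1},
                       "amyloidosis": {"pos": 0.2, "neg": 0.8}},
        "biopsy": {"asthma": {"pos": 0.1, "neg": 0.9},
                   "amyloidosis": {"pos": 0.95, "neg": 0.05}},
    }
    print(best_next_test(prior, tests))  # -> "biopsy" with these toy numbers

Repeat after each result and you get a crude "shortest path to diagnosis" heuristic; a real system would also have to weigh cost, invasiveness, and turnaround time.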


We've had expert systems since the 80s that were better than experts at some tasks (the paper I read was about antibiotic choice and dose). The problem has never been their effectiveness; it's all the issues around who is responsible for the patient.

If the doctor didn't update the software, who should be liable if something bad happens?

If the program has a bug, who should be liable?

Does the doctor have the responsibility to double check the recommendations made by the AI/expert system?

etc etc

It's much easier to just have a single point of responsibility (the doctor), as it makes the legal questions simpler.


> It's much easier to just have a single point of responsibility (the doctor), as it makes the legal questions simpler.

The way it currently stands, even with dilution of authority, the doctor is still the final point of liability. It’s a topic of heated anger in the physician community. If you don’t let midlevels work with nearly unlimited autonomy (so the hospital can use them to max efficiency), you will be replaced ... but you’re liable for them. The phlebotomy system fails to populate a blood gas order? Your fault for not micromanaging every element of the hospital operation, over which you have basically no authority, because you’re just another line worker (but thanks for carrying the liability). Docs carry liability for everything as though we were in charge, when we are actually just highly specialized line workers taking orders from above.

It’d be nice if we were only held responsible for our own errors of medical judgment.


If you can answer "given this mass of data, does this patient likely have X?" for any X, then (a) is as easy as iterating through all possible values of X and returning those where the answer is "yes". Remembering that some rare disease exists may be hard for humans, but is the easiest part for a computer. So even though some specific trial might be on (b) where X = "asthma", everyone working on (b) is also implicitly working on (a).
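A hedged sketch of that argument (the per-condition models and their callable interface are assumptions, not any real library):

    def differential(patient_data, models, threshold=0.5):
        # models: {condition_name: callable returning P(condition | data)}
        # Sweep every per-condition (b)-style classifier and return the
        # plausible hits, highest score first - a mechanical stab at (a).
        scores = {name: model(patient_data) for name, model in models.items()}
        return sorted(
            ((name, p) for name, p in scores.items() if p >= threshold),
            key=lambda item: item[1],
            reverse=True,
        )

The loop itself is trivial; the entire difficulty is hidden inside the thousands of models it assumes.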

> Remembering that some rare disease exists may be hard for humans, but is the easiest part for a computer.

I don't think that's necessarily true. The rarer a disease, the rarer the data and therefore the harder it is to train a computer to recognize it.


No, they're not.

(B) focuses on things that are easy. (A) has to deal with things that are ambiguous. The presentations (b) handles are often only a small subset of what (a) must consider.

edit: I wanted to clarify. Asking "is it this common, easily diagnosed (meaning: it has very specific features that can be easily elucidated by testing or physical examination) thing?" does not comprise a set of items that, if iterated, tells you what a patient has. Difficult diagnoses are made when things are rare, sure, but more commonly because they're ambiguous, polymorphic, and/or dynamic. Iterating through lists of (b) does not give you (a) - often, not even through a Holmesian process of elimination.

Turns out medicine is hard?


Another problem I can see with such systems is that they will enforce the status quo. But the best doctors I've met were the ones who took the risk of giving me controversial advice, along with options to choose from, including the well-known ones.

I agree with you.

A tool that helps doctors remember rare diagnoses is a powerful helper.

Isabel Healthcare has such a tool.

But clinical decision support tools tend to create resistance among doctors. So I'm not sure Isabel's tool is popular.


> It could be years before deep-learning systems are deployed in emergency rooms and clinics.

Yep. Caduceus was an expert system that did this back in 1980. https://en.wikipedia.org/wiki/CADUCEUS_(expert_system)

Since then, many Bayesian and decision-tree diagnosis systems have been created and performed comparably well. But none were ever deployed, largely because some sort of nurse or PA is still needed to do the hard part -- the physical exam. Diagnosis using existing data is easy; it's the acquisition of all the necessary data that's hard.
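For flavor, the core of those Bayesian systems boils down to something like this toy sketch (every prior and conditional probability here is invented):

    import math

    PRIOR = {"asthma": 0.05, "copd": 0.03, "amyloidosis": 0.001}
    P_FINDING = {  # P(finding present | disease), made-up numbers
        "wheeze": {"asthma": 0.8, "copd": 0.7, "amyloidosis": 0.1},
        "low_voltage_ecg": {"asthma": 0.001, "copd": 0.002, "amyloidosis": 0.6},
    }

    def rank(findings):
        # Naive Bayes: log prior + sum of log likelihoods of the observed
        # findings. Note the rare disease is never "forgotten" - it just
        # needs enough evidence to climb the list.
        scores = {}
        for disease, prior in PRIOR.items():
            scores[disease] = math.log(prior) + sum(
                math.log(P_FINDING[f][disease]) for f in findings)
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    print(rank(["wheeze"]))           # asthma on top
    print(rank(["low_voltage_ecg"]))  # amyloidosis jumps to the top

But none of this runs until someone has examined the patient and entered the findings.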

Until software/AI can replace all that a health pro does, especially the physical exam, it will remain a niche tool used only in the path lab or the back offices of insurance companies.


The common argument I've heard is that these systems won't replace nurses/PAs, but rather family physicians, whose diagnosis and referral skills may become redundant in the face of a nurse/PA to collect data and a machine learning system to understand the data and make recommendations in response to it (e.g., request more data/tests, make a diagnosis, make a referral, etc.).

> Many organizations, including Google, are developing and testing systems that analyze electronic health records in an effort to flag medical conditions such as osteoporosis, diabetes, hypertension and heart failure.

I used to have to wait a week for lab results to be mailed.

Now I'll have to log in and sync my browser to read them. But only after I confirm my cell phone number and enter the code in a confirmation email.

Oh, and here are some recommended links based on your family, er, browsing history.


I feel embarrassed saying this, but if we consider Alexa a certain form of “AI”, then the “VoiceOps” ideas I’ve heard talked about would be super duper helpful.

I can’t say this without sounding extremely lazy, but being able to ask my computer “what is the IP address of that server I just spun up” and other simple-to-intermediate queries would be a game changer for SREs and sysadmins.
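Even without real natural-language understanding, you can fake a useful slice of this with intent patterns. A toy sketch - the regexes and the stubbed lookups are hypothetical stand-ins for a real speech pipeline and cloud API:

    import re

    def newest_instance_ip():
        # Stub: in real life, query your cloud provider's API here.
        return "10.0.0.42"

    def running_instance_count():
        # Stub: same caveat.
        return 17

    INTENTS = [
        (re.compile(r"ip address of (that|the last) server", re.I),
         newest_instance_ip),
        (re.compile(r"how many (servers|instances) are running", re.I),
         running_instance_count),
    ]

    def handle(utterance):
        # Return the first matching intent's answer, else a fallback.
        for pattern, action in INTENTS:
            if pattern.search(utterance):
                return str(action())
        return "Sorry, I don't know that one yet."

    print(handle("what is the ip address of that server I just spun up"))

The hard part isn't the matching, it's the plumbing: auth, resolving which "that server" you meant, and not letting a voice interface run destructive commands.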


I’ve actually been talking to a lot of people recently about these kinds of concepts - figuring out what kinds of problems people are having that could be solved with current levels of AI speech/language understanding.

Would I be able to drop you an email or PM with a few questions?



