> This still sounds bad. 5 mins to rework your notes after each patient visit? I didn't assume doctors had that kind of time.
Compared to what, though? It reads not as additional work, but as less work than doing all of that manually, which would likely take more than 5 minutes.
> And let me make this clear. I, as your patient, I never NEVER want the AI's treatment plan.
Where are you getting this from? Neither the parent's comment nor the article talks about the AI assistant coming up with a treatment plan, and it seems to be all about voice-dictating and "ambient listening" with the goal of "free clinicians from much of the administrative burden of healthcare", so seems a bit needlessly antagonistic.
If you should ever couch its knowledge as your knowledge, I would think you could be in serious trouble. You would have to say something like "the AI's plan to treat you, which I think might be correct", when what I want to hear is "my plan to treat you is: ..."
But I think it's more subtle than that, because I expect the AI to reinforce all your biases. Whatever biases (human biases, medical biases, biases that arise from what a patient isn't telling you) go into the question you feed it, it will take cues you didn't even know you were giving and use those cues to formulate the answer it thinks you expect to hear. That seems really dangerous to me, sort of like you're conceptually introducing AI imposter doctors to the staff, whose main goal is to act knowledgeable all the time so people don't think they are imposters...
I dunno. I'd like to give this particular strain of techno-futurism back. Can I have a different one, please?
> If you should ever couch its knowledge as your knowledge
Again, "its knowledge" should be "your knowledge", since it's just transcribing what the doctor and patient is talking about. It's not generating stuff from out of the blue.
What you write are certainly valid concerns and things to watch out for, but for a transcription tool? I'm not sure it's as dangerous as you seem to think it is.
> I dunno. I'd like to give this particular strain of techno-futurism back. Can I have a different one, please?
This sounds like I'm rewatching the early episodes of Star Trek: Voyager - the gist of the complaints is the same as what the fictional crew voiced about the AI doctor (Emergency Medical Hologram) they were stuck with when the "organic" doctor died.
The show correctly portrays the struggle of getting people to trust an AI physician, despite it being very good at its job. It also curiously avoids dealing with the question of why even have human/organic doctors when the EMH is obviously far superior in every aspect of the job. Both of these have strong parallels to the world today.
I understood the entire purpose of the tool to be logging the existing conversation (which includes the assessment and plan, since your doctor should tell you about those verbally regardless of AI use), so "coming up" is really "transcribing".
Someone who's used the tool probably knows best though; I'm just going by what the article states.
A more accurate phrasing would be “decent job extracting a medical assessment and plan in medical language from a layman’s terms explanation to the patient”.