This still sounds bad. 5 mins to rework your notes after each patient visit? I didn't assume doctors had that kind of time.
And let me make this clear. I, as your patient, I never NEVER want the AI's treatment plan. If you aren't capable of thinking with your own brain, I have no desire to trust you with my health, just like I would never "trust" an AI to do any technical job I was personally responsible for, because it doesn't care at all if it causes a disaster. It's just a stochastic word picker. YOU are a doctor.
> This still sounds bad. 5 mins to rework your notes after each patient visit? I didn't assume doctors had that kind of time.
Compared to what though? It reads not as additional work, but as less work than manually having to do all that, which would likely take more than 5 minutes.
> And let me make this clear. I, as your patient, I never NEVER want the AI's treatment plan.
Where are you getting this from? Neither the parent's comment nor the article talks about the AI assistant coming up with a treatment plan, and it seems to be all about voice dictation and "ambient listening", with the goal being to "free clinicians from much of the administrative burden of healthcare", so this seems a bit needlessly antagonistic.
If you should ever couch its knowledge as your knowledge, I would think you could be in serious trouble. You would have to say something like "the AI's plan to treat you, which I think might be correct", when what I want to hear is "my plan to treat you is: ..."
But I think it's more subtle than that, because I expect the AI to reinforce all your biases. Whatever biases (human biases, medical biases, biases that arise from what a patient isn't telling you) go into the question you feed it, it will take cues you didn't even know you were giving and use those cues to formulate the answer it thinks you expect to hear. That seems really dangerous to me, sort of like you're conceptually introducing AI imposter doctors to the staff, whose main goal is to act knowledgeable all the time so people don't think they are imposters...
I dunno. I'd like to give this particular strain of techno-futurism back. Can I have a different one please?
> If you should ever couch its knowledge as your knowledge
Again, "its knowledge" should be "your knowledge", since it's just transcribing what the doctor and patient is talking about. It's not generating stuff from out of the blue.
What you write are certainly valuable concerns and things to watch out for, but for a transcription tool? I'm not sure it's as dangerous as you seem to think it is.
> I dunno. I'd like to give this particular strain of techno-futurism back. Can I have a different one please?
This sounds like I'm rewatching the early episodes of Star Trek: Voyager - the gist of the complaints is the same as what the fictional crew voiced about the AI doctor (Emergency Medical Hologram) they were stuck with when the "organic" doctor died.
The show correctly portrays the struggle of getting people to trust an AI physician, despite it being very good at its job. It also curiously avoids dealing with the question of why even have human/organic doctors when the EMH is obviously far superior in every aspect of the job. Both of these have strong parallels to the world today.
I understood the entire purpose of the tool to be logging the existing conversation (which includes the assessment and plan, since your doctor should tell you about them verbally, regardless of AI use), so "coming up" is really "transcribing".
Someone who's actually used the tool probably knows best though; I'm just going by what the article states.
A more accurate phrasing would be “decent job extracting a medical assessment and plan in medical language from a layman’s terms explanation to the patient”.
> 5 mins to rework your notes after each patient visit? I didn't assume doctors had that kind of time.
I worked in healthcare for over a decade (actually for a company that Nuance acquired, prior to Nuance's own acquisition) and the previous workflow was that they'd pick up a phone, call a number, dictate all their notes, and then have to revisit the transcription to make sure it was accurate. Surgeons in particular have to spend a ton of time on documentation.
I think you may be misunderstanding how the tool is used (at least the version I used).
The doctor talks to the patient, does an exam, then formulates and discusses the plan with the patient. The whole conversation is recorded and converted to a note after the patient has left the room.
The diagnosis and plan were already worked out while talking to the patient. The AI has to convert that conversation into a note. The AI can't influence the plan because the plan was already discussed and the patient is gone.
AI is an assistive tool at best, but it can probably speed things up by reflowing text. I use Dragon dictation with one of the Philips microphones and it makes enough mistakes that I would probably spend the same time editing/proofing. Had a good example yesterday where it missed a key NOT in an impression.
As an aside, the after-hours work is what burns out physicians. There is time after the visit to do a note; 5 min for a very simple one is reasonable to create, dictate, fax, do the workflow for billing, and request a follow-up within a given system. A new consult might take 10 min between visits if you have time.
For after hours, the ER is in my opinion a bad example, because when you are done, you are done.
Take a chronic disease speciality or GP and it is hours of paperwork after clinic to finish notes (worse if teaching students), triage referrals, deal with patient phone calls that came in, deal with results and act on them, read faxes, etc. I saw my last patient ~4:30 yesterday and left for home at 7 dealing with notes and stuff that came in since Thursday night.
> And let me make this clear. I, as your patient, I never NEVER want the AI's treatment plan. If you aren't capable of thinking with your own brain, I have no desire to trust you with my health,
To my understanding this tool is for transcription/summarization, replacing administrative work rather than any critical decision making.
> just like I would never "trust" an AI to do any technical job
I'd trust a model (whether machine-learning or traditional) to the degree of its measured accuracy on the given task. If some deep neural network for tumor detection/classification has been independently verified as having higher recall/precision than the human baseline, then I have no real issue with it. I don't see the sense in having a seemingly absolute rejection ("never NEVER").
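To make "measured accuracy" concrete, here's a minimal sketch of the kind of comparison I mean; the confusion-matrix counts are invented purely for illustration:

```python
# Toy comparison of a hypothetical tumor-detection model against a human
# baseline on the same evaluation set. All counts are made up for illustration.

def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Return (precision, recall) from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts: the model flags 98 cases and gets 95 right,
# the human flags 89 cases and gets 85 right, out of 100 true positives.
model_p, model_r = precision_recall(tp=95, fp=3, fn=5)
human_p, human_r = precision_recall(tp=85, fp=4, fn=15)

print(f"model: precision={model_p:.2f} recall={model_r:.2f}")
print(f"human: precision={human_p:.2f} recall={human_r:.2f}")
```

If an independently verified evaluation shows the model clearing the human baseline on both numbers, that's the sort of evidence I'd want before trusting it for that task.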
> I, as your patient, I never NEVER want the AI's treatment plan.
You as a patient are going to get an AI treatment plan. Make your peace with it.
You may have some mild input as to whether it's laundered through a doctor, packaged software, a SaaS, or LLM-generated clinical guidelines... but you're not escaping an AI guiding the show. Sorry.