This really feels farcical, kind of like VC bros hyping up a product. Considering this was a serious, life-and-death situation for a patient, it was surprising how much it read like a prank, or like the scene from Harold and Kumar where Kumar plays a fake doctor and saves an ER patient through sheer luck.
I know some doctors are full of baloney, but c'mon. I really hope we create better doctors who have a lot of empathy and are able to wield technology to make themselves 10x better, like we're doing with programming. But this is not the way.
Edit: I realized it's fake... I think. It has to be; it doesn't make sense, as others mention. You're not reading a five-paragraph speech to a patient's family while they are being treated. Also, the article says to double-check and verify what is being said. You'd better proofread that too! Or it might invent something absolutely crazy, and you may be liable.
The author is a doctor, but also an executive at a health-focused VC fund with several AI companies in its portfolio. Helpful context to have before reading his article, I think.
I am currently looking for a new Team Lead. I interviewed a guy yesterday who performed well in a soft skills interview and so this was his tech interview with a senior dev and myself. He was doing OK at first. As the questions got a little more nuanced, there were these pauses. Eventually, he got flustered and asked to resume later.
We agreed to this since he said he'd been dealing with a production issue for several hours before the scheduled time (I had offered to reschedule at no detriment to him before we began but he insisted on doing it then).
The developer and I met after this first interview and discussed his answers. We both thought he was "googling" (we are older people, stuck in the past).
Later that day, back on another call with this guy, my senior dev starts typing my questions into ChatGPT and getting back essentially the answers he's giving us. I start with more nuanced questions and he's done. He just kept reiterating platitudes and could not explain why he was saying them, etc.
I felt bad for him, he was trying to get a job beyond his technical knowledge. But ultimately, this was dishonest and I would have found out very quickly he was not ready for the job.
If anyone is wondering, I gave him a chance to tell me why he was pausing for so long before answering and why his answers didn't seem to directly address my questions and he just insisted he was NOT pausing (he was, for up to 30 seconds at a time). It was generally just a really disappointing experience for all of us, I would imagine.
> ChatGPT is excellent at writing "fluffy" pieces full of empathy, compassion, PR talk, politician speech,
I think you're putting a bunch of completely different things in the same basket.
Making the text more fluffy will not make it more empathetic, not only (but mainly) because there isn't anyone actually empathising with the recipient. Our bullshit receptors are good at spotting dishonesty, so it'll just sound cringe, putting it at the same level as PR talk. But that's not empathy or compassion, just cheap and obvious packaging.
> As an engineer who is unable to produce such writing, this tool is quite helpful!
I genuinely thought you were being sarcastic here. Of course you are able to produce such writing, and you don't need to use more filler words to do it.
I do think that there's value in using GPT for training purposes here, that is, learning how one could express oneself in a more context-appropriate way. This is not much different from stylistic advice, e.g. in many languages a one-word yes/no response to a question is considered neutral, whereas in English it can be considered rude/abrupt, so people use question tags more often.
> Our bullshit receptors are good at spotting dishonesty
Is this a bad joke? What exactly leads you to that conclusion? People fall for dishonesty and straight up lies all the time. We're awful at recognizing it. Maybe you're the one in a billion who can spot a lie at a million miles, but the vast majority can't.
>Making the text more fluffy will not make it more empathetic,
Of course not, but empathy in communication (not in action) is full of fluff. It almost requires it.
>Our bullshit receptors are good at spotting dishonesty, so it'll just sound cringe
All "genuine" affirmation for the sake of empathy sounds cringe to me. I'd rather the doctor devoted their time to doing their job well instead of trying to make someone feel heard and validated - especially in an ER scenario where they are juggling critical patients. This tech can help with that.
> Of course not, but empathy in communication (not in action) is full of fluff. It almost requires it.
Empathy doesn't require fluff at all. (Think of all the short, poignant messages you've sent or received when someone is upset.)
The corporate need to not give ground on a complaint is where all that reassuring, repetitive, empty BS comes from.
> All "genuine" affirmation for the sake of empathy sounds cringe to me. I'd rather the doctor devoted their time to doing their job well instead of trying to make someone feel heard and validated - especially in an ER scenario where they are juggling critical patients. This tech can help with that.
Honestly with "for the sake of empathy" it sounds like you really don't place a high value on empathy (which is not unusual, or necessarily wrong). But if that is the case you're quite obviously not the right person to assess whether ChatGPT and the like can "help with that" in that context! :-)
>(Think of all the short, poignant messages you've sent or received when someone is upset.)
If it is short and not fluffy enough, it risks sounding dismissive. Those short messages generally pave the way for a deeper conversation about the subject.
>Honestly with "for the sake of empathy" it sounds like you really don't place a high value on empathy (which is not unusual, or necessarily wrong).
Not really. I think empathy is important in the right setting, but it is not the most important thing. Certainly not in the ER, if the doctor is overworked and has lives at stake. If they have the bandwidth, sure. If not, can't blame them.
>But if that is the case you're quite obviously not the right person to assess whether ChatGPT and the like can "help with that" in that context! :-)
Disagree. I know how to sound empathetic for those that need it. Some people need words of affirmation and validation to be lifted. I am not one of them but I understand. It is not that hard. Modern LLMs are more than capable of creating the prose for that and more. There is a time and place for everything though. My empathy generally drives me to action and solving problems.
> I know how to sound empathetic for those that need it.
But that isn't actually empathy, and people can tell the difference.
> Some people need words of affirmation and validation to be lifted.
Not the words. The understanding and sharing that underpins the words.
My point is this: if you think empathy can be faked successfully, you simply aren't the right sort of person to decide whether the results of automated faking with an LLM are valuable to the listener.
Because people can very often tell when empathy is being faked. And when they do discover that empathy is being faked, you are not going to be easily forgiven.
Empathy implicitly involves exposing someone's feelings to the air, as it were, in order to identify that you understand and share them. So faked empathy is variously experienced as insulting, patronising, hurtful, derisive etc.
Using an LLM to create verbose fluffy fake empathy is going to stick out like a sore thumb.
If this isn't something you find easy to understand at a level of feeling, don't fake empathy, especially at volume. Stick to something very simple and an offer of contextually useful help.
> My empathy generally drives me to action and solving problems.
I think this is noble and valuable, and I would in your shoes stick to this. Offers of assistance are a kindness.
But you should never pretend to share someone's feelings if you don't share their feelings. Especially not in volume.
Depending on the situation, it can make things actively harder as well. Do I spend the time to verbally fluff up the relative of a patient, or go and tend to 5+ other patients who need critical care right now (the example in this article)? If the first one is expected of me, care suffers and the job is harder. I am not dismissing the needs the relative of a patient has, don't get me wrong. But in an ER setting it rarely is the priority. If some tech makes it easier to give that need some additional bandwidth more effectively, that is a good thing.
This touches on something: the same reason that "I'm sorry" apologies should not be written by chatbots - for some things it's most important that a human being did something, more than what they did.
When a big mistake has happened and you need to apologize, it had better be coming from you yourself. If you think you can improve it by delegating it to a fancy autocorrect software, you're missing the point of why we apologize. Or if you think a bot is more 'empathetic' than a human paying attention, you've lost touch with what it means to be empathetic. A bot can't even feel, let alone feel your situation vicariously.
Many comments are saying it's helpful. But what happens when it's widespread, with everyone crafting Hallmark-type empathic messages to everyone? Or, once everyone has bots, most likely these messages will all be sent straight to spam or something. We will be inundated with such messaging.
"Chatgpt, write an email based on these bullet points!"
"Chatgpt, summarize this email into a few bullet points!"
Somehow, I doubt we'll notice. The reason we don't send the bullet-points email today is that the receiver will think we're angry with them. In university, the engineers all needed alcohol as a social lubricant; if ChatGPT replaces it, I expect we'll trade hangovers for higher electricity bills and carry on.
I look forward to AI-generated sappy Hallmark-like messages, because they're bound to be better written than the same types of messages most humans write today.
The sentiment will be no less fake. It'll just be better written.
We already do that during normal communication - people rarely say what they mean in the simplest way possible. We couch the message in all kinds of extraneous details. The tediousness of which is part of why some people want to use LLMs to generate messages for them in the first place.
Stripping that out might ironically reduce the need: if people come to expect their messages to be automatically stripped of fluff, why include it in the first place?
It's exactly because most of us can write concisely that there is a perceived benefit to using ChatGPT and similar tools to expand a concise instruction into a longer message that meets social expectations. Those social expectations are what often prevent us from sending those concise messages to people today.
I presume the issue is detecting when the meaning is intended or not. Like if a doctor uses it to communicate their care vs a salesman uses it to get a deal?
It's not actual empathy. It's writing that seems empathic.
The problem is that, exposed to enough writing that seems empathic but isn't, a person will learn that empathic writing is not sincere. We lose that as a society in the same way that if I check my email and see a "YOU'VE WON" subject I don't get excited, because I know it's fake.
Do we really want that? A world in which when you see an empathic message, the first thing you think of is that it's fake? I find the idea to be quite sad.
You say that as if it is a known fact, but I don't think that is a given at all.
The converse could be true: maybe being inundated with fake empathy is almost as good as being inundated with the real thing, a la the placebo effect (or the studies which have shown that just forcing yourself to smile can actually improve your mood).
Empathy and compassion isn't in the words. It's in the intent behind them.
Faked empathy literally isn't empathy. (Fake compassion is a little more arguable; there are clearly situations where faked compassion is better than no compassion.)
Not everyone needs this or can do this on the same scale. I have really had to learn to find empathy and be unafraid of it, and I definitely have nearly zero tact (if you have no tact, you practise thinking good things).
But if you need to be able to practise empathy and compassion regularly, and you are incapable of it, find someone who can do that for you. Celebrate their ability to do it and you'll benefit from it yourself. Trade them something they don't do so well.
Helpful until someone reading your “writing” realises it's AI and that you're a robot who needs AI to write something with empathy or compassion. I'd much rather hear someone's real voice in their writing than something false. It reminds me of people in certain places who are fake nice and fake positive 100% of the time. It's draining to be around them.
It is also draining to be around people that always require feeling heard and validated or else they get crabby. Two sides to everything. The doc in this story probably does not require AI to write something compassionate. He simply does not have the bandwidth to do it while juggling critical patients and a family that is not taking no for an answer and is insisting on inappropriate treatment.
Isn't that why there are nurses? Or maybe we need in-between positions: people who actually do the diagnosis and surgery, and people to explain them. Wait, those are nurses?
In a more scheduled hospital / care setting, yes. In ER care setting (the example in the article), it is a lot more chaotic. Disseminating care information to people further down / up the hierarchy takes time that they might not have.
That makes sense assuming engineers don't need to practice this skill very often. But doctors need to do fluffy, compassionate communication every shift.
Nothing about the story makes sense. Doctors and nurses say these kinds of things to patients every day (I'm an EMT, I've watched them do it and I do it myself). I'm seriously wondering if the relatives were responding to the fact that the clinician was visibly reading from a piece of paper and that somehow made it more 'official'.
PR talk and political speech, I agree. I wouldn't call this fabricated shit "empathy" or "compassion"; for both, you need a little more than an LLM fine-tuned by SV lefties. In fact, whenever ChatGPT tries to be empathic, it feels very pathetic.
Ironically, your comment about the state of doctors is quite lacking in empathy for the doctors themselves.
Regardless of whether or not this article is a fake, I think you don't quite understand how overworked most doctors are, and how little time they have been given per patient to take care of what needs to be done.
Remember that the AMA has been limiting the supply of doctors in order to maintain high wages, and the working conditions for interns/residents amount to hazing.
That’s not how that works. The AMA doesn’t control the number of physicians. 20+ years ago the AMA supported Congress reducing the number of residencies because at the time there was a predicted glut of doctors.
However the AMA has reversed that position and supports increasing the number. Again though they don’t control the number of physicians.
Putting aside the question of whether the linked article is fake, I don't necessarily fault medical workers for handling their work dryly.
Those doctors and nurses and technicians have to deal with all manner of disgust, biohazards, sadness, and most significantly death each and every single day. I cannot in good faith demand that they treat everyone with empathy; that might as well be psychological torture for the medical workers.
This isn't to say they shouldn't be courteous, professional, and kind to their patients, that should go without saying.
I understand this point. Many doctors I see are always very social and smiling in public, even though they have seen horrors in the ER. I kind of get the callous, emotionally stable, always cheery and positive, but not too emotionally deep persona (think of the doctor in movies delivering bad news with a straight face) that they need in order to operate like a robot, ironically enough.
But maybe, instead of what this article proposes, we should do the opposite: make our doctors more empathetic, and leave the robot to do the grunt work, machinery, surgery, etc. A lot of comments are saying they find the empathy useful. I'm not sure if I would be able to tell if someone sends me a crafted message, but I don't like the idea of a message being sent by ChatGPT that is meant to artificially create empathy; to me it's fake empathy.
This is all theory, but I don't think robots creating fake empathy would resonate with humans once this is widespread. Maybe it creates a sort of disconnection, where people just blatantly avoid falling for text messages and paragraphs that sound empathetic.
There are some similarities too with the movie Big Hero 6 and the Baymax robot that was created as a care robot. Initially, Hiro is very annoyed by it, because of its rote "artificial empathy" voice and messages. But its intelligence is what brings him around, when it understands things like context better.
> Maybe it creates a sort of disconnection, where people just blatantly avoid falling for text messages and paragraphs that sound empathetic.
That would be good to a degree. Right now, people are constantly falling for maliciously crafted empathetic/emotional messaging, coming from the mouths and from under the pens of journalists, salesmen, politicians, advertisers, pundits, and social media influencers.
In some sense, it's really saddening that people find issue with emotional text written by a bot, while they don't seem to find any problem with being constantly subjected to malicious emotional messages from aforementioned ill-intentioned parties.
ER nurses doing triage are some tough mother*ers. You walk in, bleeding and pretty sure you will die soon; the nurse takes one look at you and is not at all impressed. "Yeah, take a number, and keep pressure on the wound while you wait," or "We are really busy right now. You will have to wait for many hours. You don't really need a doctor. Just do ... ... ... and it will be fine."
One thing I have learned in life is that if you are at the ER and you have to wait a long time, you are lucky. It is when you are rushed into the back right away that you know that whatever has happened, it is severe, and you should be scared.
I'm going to go out on a limb and suggest that most people on HN have some experience of the patient family side of medical problems, some of them no doubt significant and regular.
We're not doubting the medical situation (which the ER doctor agrees sounds plausible), that GPT could have produced that response, or that doctors finding it difficult to communicate with family members in intelligible and empathetic terms is a real problem.
But if you're the sort of person who would "melt into calm agreeability" when basically the same explanation was offered with generic corpspeak appended (i.e. you'd trust the doctor more if he answered your question by reading from a script so non-specific and non-empathetic it finished up with "if you have any questions or concerns please contact the medical team"!), and would also be delighted to get the same script read out to you each time you asked a different staff member the same question, I think you're very much in the minority. Certainly in my limited experience, I can assure you that I was more reassured by the empathy levels of professionals completely misunderstanding my question and thinking I was threatening to make a complaint about their standard of care than I would have been if they had pulled out an index card and read that they were all doing their best, the treatment is [boilerplate], please do not hesitate to contact the medical team in case of any questions, and we will reread the index card to you.
At best, I suppose, my reaction might be to interpret the index card boilerplate repetition as a polite way of telling me to fuck off and not offer any followup questions. You could even make an argument that this is medically useful; a more specific version of "here's a leaflet" so they can get on with doing their job.
But as written, where everybody loves the GPT boilerplate, the story reads like a classic of the LinkedIn/politician "people miraculously came round to supporting me and everybody cheered" genre.
My wife, an ER doctor, doesn’t believe that the doctor read the script to the patient. She thought that was completely absurd. She started laughing when she got to that part of the article.
For what it's worth, I'm a doctor and I find this story hard to believe. It does sound to me like wishful thinking by an AI hype guy.
In my experience, these sorts of situations happen when two things come together:
1. The patient's family has just enough medical knowledge to fall onto the wrong part of a Dunning-Kruger curve.
2. The family has certain beliefs about the medical establishment. Namely, that medical staff (especially doctors) are trained in medical school to have an inflexible way of thinking and that the doctors are dismissive of the family's proposed treatment because they are married to textbook thinking and rigidly following hospital protocols.
#1 happens all the time - maybe even the majority of the time - but it isn't enough to cause conflict on its own. #2 is the special sauce that makes the situation boil over.
What is the absolute last thing you want to do given #2? Probably something along the lines of handing every member of the staff a pre-written script to recite every time a family member asks questions. This will make them feel like they are being stonewalled, not like they are being listened to. This will confirm their fears about point #2, not ameliorate them.
I can't say I've ever seen someone print off a script quite like this before, but I have seen some doctors piss off patients/families by relying too much on pat phrases or repeatedly pointing to a particular clinical guideline or hospital policy as the rationale for their decision.
> The family has certain beliefs about the medical establishment.
> Namely, that medical staff (especially doctors) are trained in medical school to have an inflexible way of thinking
This belief saved my mother's life.
We came in with complaints of lung issues; for some reason the doctors decided allergies were most likely, but we wanted an X-ray. Months later they finally did a scan and found lung cancer.
I don't know if they are following a flow chart, but when dealing with NHS doctors I sometimes feel like I am talking to a bot.
They have the same routines; for example, they will never ever check for Vitamin D deficiency, etc.
I can't imagine what will happen once real bots / ChatGPT become widespread.
Well as with most beliefs, sometimes it's wrong and sometimes it's right.
I don't know anything about your mother's case, and I don’t personally have any experience with the NHS, so I can't say if it would have been appropriate to get a CT earlier in her case. Maybe it would have been.
I do feel that we as doctors can be too rigid in our decision making sometimes, but my thoughts on this are quite nuanced and probably very different from the average layperson's. It's probably also a multi-page writeup, so I won't get into it here.
In general though, CT scans cause cancer in about 1 out of every 1000 cases [1], so if we ordered them on everyone who asked, we could very well cause more cancer than we diagnosed.
In the US, we already have a system that is much more patient-driven and much more aggressive on testing than most of western Europe, and we still have worse health outcomes.
I agree with your concerns about ChatGPT. Patients want face-to-face time and I think they deserve that. Unfortunately that's low-hanging fruit for C-suite executives to pick at when cutting costs.
I don’t think that first one is an ER doctor (having ER experience is not being an ER doctor - at least until COVID every medical student had 4 weeks of “experience”) - you’d just say you’re an ED physician.
But more pertinent, on reading it and a reply I think what is being deemed realistic is the scenario (ie patients family arguing treatment) but that’s not terribly controversial, the followup comment even admits that they would not feel good if they were presented with an AI readout: “If they are not processing the information then you have to allow for that, but I do think some relatives would react very badly if given a printout from an AI to explain a situation”.
So I don’t think this makes your case as strongly as you think.
The other one is hearsay agreeing with the “premise” which is a charitable way of saying the story itself is likely bullshit - and in the end disputes that ChatGPT is useful here (go actually read the comment).
For what it’s worth I think the meat of the story (ie the use of ChatGPT) is farcical.
One, I’ve been on the other side even before medical training; this idea that no doctor could imagine what it’s like to be on the other side is stupid. When I had questions about my dying family member prior to any medical training, if the response had been to pull out a canned script of what are clearly corp-speak platitudes, I would have requested a different provider or a transfer.
And finally, pulmonary edema. Good example for an uninformed audience that doesn’t know better, but this is waaay too common - an ED doc in a medium busy service can easily see 10 presentations a day. This is so basic that this type of conversation happens many many times over (commonly multiple times a day for this same presentation). Answering family questions and addressing objections over routine presentations is something that an attending should have been doing for nearly a decade at least.
In academic medicine we talk about how to do this better but I see no evidence this ChatGPT nonsense presented has anything new to add.
I believe it. I've used chatgpt for inspiration on how to craft difficult messages with empathy. I've been managing people for a long time and could've done it independently, but it's good to get help. It can be tough to express empathy when exhausted after a long day of urgent demands and responsibilities.
This is no different than asking chatgpt to draft code that I could otherwise write from scratch. It's like having an assistant.
I get the approach of stylistic advice. I think that people in this thread have two separate conversations, one about an empathetic style and one about empathy.
Empathy means sharing (often uncomfortable) emotion with another person.
If what the text represents is true, i.e. the model helped you express how you really feel but you're too tired to write it, that sounds like a net positive.
Although in my experience, if I'm too tired to sound empathetic on my own, I'd rather avoid messaging people at all or put whatever message I have for them in context, e.g.: "Hey, it's been a long day and I'm knackered, so I'll be brief: ...". People appreciate directness and honesty. But I don't work much in places like large corps any more, where communication is already very formalised and "templated empathy" just follows the existing practices (not saying that you are, just giving an example here).
Right now it feels kind of neat, and maybe it works when others are reading well-crafted messages. But what happens when everyone has a bot and a bot is talking to a bot? I, for one, don't want to be sent, or fall for, emotionally manipulative messages, or messages that try to sound empathetic, or maybe even are empathetic.
When this type of messaging is so widespread, I wonder what will happen. I think people will ignore most of it. Maybe we'll evolve to not be emotional anymore, when emotions are everywhere? I don't know, but it's an interesting hypothetical.
The person sending the message is accountable for its content, no matter how it was authored.
Maybe I want to be tactful around a delicate issue, but am struggling to find the right words. An LLM can help me. Alternatively, someone who wants to be manipulative and deceitful might also optimize their message using an LLM.
The tool is of course not inherently good or evil. But one can use the tool for different moral outcomes.
But we can quite easily feel the difference between these ChatGPT messages which are too much and a genuine message.
We know corporate BS when we see it.
And in the example in this post I think the doctor said more or less the same thing to the relatives as the ChatGPT message; the biggest difference was that with the ChatGPT message he sat down with everyone around and read it out loud. Had he stopped, given a few seconds/minutes with all the relatives around him, and talked in his own words, he would have achieved the same effect.
The other effect is that it was written down. Written text seems more authoritative than words that just come out of a mouth, so reading from a paper sounds more factual than just speaking. He could have taken any paper. Or have prepared papers for the most common questions.
Will it really be a difference? Corporate speak is already a thing, people already feign emotions and sympathy. Sure, this is more, but it feels to me like more of the same; I don't think it even magnifies the problem that much.
I don’t see how these are different from any of the usual platitudes everyone commonly uses. I’d worry about everyone sounding the same but nevermind because we all use the same turns of phrase already.
I only believe it as a cynical marketing piece from an executive at a health-focused VC fund with several AI companies in its portfolio.
This could spectacularly have backfired: "<expletives>, what kind of doctor are you? Having to use ChatGPT to treat my mother. Do you even have a medical degree?"
When communicating difficult news, you have to be concise. The ChatGPT-generated response is very fluffy.
ChatGPT can also hallucinate, so you must be extremely careful when using it at the end of your shift.
Fake or not, people are incorporating ChatGPT into their work. We had a pipeline break today and when we looked at the PR that introduced the change it was in the PR notes that the tooling configuration was derived from a ChatGPT prompt. What do you even say or do about that?
I've been embarrassed by a Copilot code snippet that I proofread to seem correct, but was incorrect due to some subtle comparison logic.
I can only imagine the risks in professions that aren't dealing with logic that has a very finite list of options or where the answers AI produce require researched effort to validate.
It's probably worse in law if you're given a case reference as you're professionally supposed to read the document in substance to not be muddled by selective quoting, then if it's from a foreign jurisdiction you have to also check if it's applicable to you.
> I can only imagine the risks in professions that aren't dealing with logic that has a very finite list of options or where the answers AI produce require researched effort to validate
This can be done the same way as it’s done in the non-AI-assisted process: mistakes are made, then the trained professionals correct the course of action. The problem is with people treating and selling the process like it’s a simple logical procedure. Then the layman will think that since a machine crafted the answer, and since machines are logical, the answer must be right.
Some software mistakes can cost millions... not saying it happens all the time, but it can happen. We SHOULD try to be as accurate and bug-free as we can in this business; it's no joke.
Yeah, sometimes it's just an HTML div that looks funny (which can also cost the company, depending on how embarrassing it looked and how much traffic was impacted). But sometimes it's backend logic that screws up important data or who knows what, with very real reputational or financial consequences to the business.
This is a good question. There are no tests for tooling, but the tooling can result in pipeline failures. Theoretically the pipeline should've failed outright when they went to merge. Instead it failed on specific conditions the developer didn't account for, because the configuration wasn't something they were familiar with: they weren't building it from the docs.
Some specialties attract different personalities. If you’ve ever been in a hospital setting around a lot of neuro residents, many of them will present… oddly. It’s a feature. Likewise, a lot of ER docs are kinda adrenaline people who are great at triage and quickly stabilizing you, but maybe not so much good at relating with people. That’s a feature — you don’t need a soft touch dealing with trauma.
If ChatGPT can help someone perform better in their area of weakness, great. If it can help super smart people get information out of their heads, great.
I have worked with many poor communicators and outright idiots, but there is virtually no one graduating from a decent medical school with native-speaker skills who can't explain in simple terms why giving more fluids to someone who is overloaded with fluid is a bad idea (and without using any jargon like edema or diuresis, another thing missing in this whole farce; any 1st year, let alone a decent attending, will explain clearly and simply what is being done. You're not giving fluids, then what the fuck is your plan, doctor?).
I’ve yet to meet the combo of poor communicator/idiot that would believe reading a canned “empathy” script would somehow be helpful.
Seriously is there anyone here that would feel confident in their doctor if they came reading a plan off a sheet like a teleprompter?
This guy is trying too hard and assumes most of his audience won’t realize how idiotic this sounds.
I disagree with the first part, mostly because I believe doctor-speak is a skill medical professionals use to deal with patients who, in their desperation, think they know more than they do.
Yet I fully agree with the second paragraph. The idea of the teleprompter doctor working is so absurd it would be rejected immediately if not for the ChatGPT hype.
The teleprompter bit makes no sense. Do you feel less confident in your doctor after learning that any kind of serious treatment usually involves a checklist?
Following a script isn't a sign of lacking skills - it's a sign of being wise enough to recognize human limitations. Medical care isn't improv comedy.
Looking at notes or a paper is one thing and checklists and timeouts serve an important role in safety, but being unable to hold a conversation about a fundamental concept without a script is quite another, and it is lacking a skill by definition.
Really depends on the situation. I have no doubts a typical doctor could easily ELI5[0] you anything about the procedures or treatments they're administering - once well fed, well rested, and somewhat relaxed. But if you're catching them as they're overworked and asking them to context-switch on the spot, well... I'd expect it to work just as well as it would with software engineers. That is, if I was subject to random ELI5 requests during a busy work period, you'd bet I'd start preparing notes up front (and probably put them on a Wiki, and then give the people asking me a link to that wiki, and politely tell them to RTFM and GTFO).
--
[0] - "Explain Like I'm 5 [years old]". It's Reddit-speak, but describes the concept quite well, and there isn't a proper single word alternative for it.
First, it really depends on the speciality and the emphasis on the clinical practice within that specialty.
But I'll address "I'd expect it to work just as well as it would with software engineers."
This is not a great comparison. Orally presenting patients to peers and explaining things to non-experts are core skills of medical training and fundamental to clinical practice. Understanding, and explaining in simple terms, the essential fundamentals of a handful of extremely common conditions is something that the majority of doctors will have to do thousands of times throughout residency, possibly at the tail end of a 24.
[I'm actually not in favor of it - because it often borders on or is hazing, with little verified educational value - but a historically common and still present practice in some places is making the intern present critically ill patients to the day team and attending after they are coming off of a 24-hour shift without any notes. Speaking of context switches, they will often have 5 or more patients like this. Even if not this extreme, the point is, doctor and software engineer training have little overlap (eg why numerous fools trying to make the next stack overflow for MDs have never succeeded).]
As for this example, I can't overemphasize how common pulmonary edema and volume overload are presenting findings in the ED. This is like an experienced programmer going to ChatGPT to explain the addition operator in Javascript, which you could do, but would it be unfair to expect someone to come up with an explanation on the spot? Maybe, but then probably medicine is not a career for you. It does emphasize a different set of skill sets. And yes picked that example on purpose, because maybe one doesn't remember all the stupid implicit type conversion rules but one can still come up with a basic explanation.
As a doctor with a mixed socio-economic patient population if I just say "your family member has pulmonary edema" I'm already getting a fucking blank stare much of the time, especially if it is new. I might as well just tell them they have dropsy. I can even tailor patient education to the audience as well after 10 years of doing this for a living, maybe someone wants an ELI6 (it is often the tech people, some of them are alright - though that crowds tends to often want the iamverysmart explanation - or like many on this site just explain why I'm an idiot that knows their field less than they do).
As for specialty, people go into pathology and radiology for instance to avoid all this once done with medical school, but that is only a portion of doctors. ELI5 is relative too - consulting subspecialists and radiologists must ELI5 to their more generalist colleagues.
> asking them to context-switch on the spot
I mean this is part and parcel of hospital medicine. 5 years out of residency one can easily run through a list and handoff 20 patients with a few notes on a single sheet of paper, whereas a medical student will often have an awkward folding clipboard with a ream of notes for their 3 patients.
> If I was subject to random ELI5 requests during a busy work period, you'd bet I'd start preparing notes up front
I need a few notes to remind me of a patient's clinical status in reference to their core problems. I absolutely do not need notes on how to converse with patient's which are 99% of the time the same things over and over. Despite what's on TV, medicine is overwhelmingly routine - the drama is usually the social issues.
> and probably put them on a Wiki, and then give the people asking me a link to that wiki, and politely tell them to RTFM and GTFO
Yeah, the starting points of this approach for the typical doctor and the typical techie are unsurprisingly starkly different. Telling patients to "RTFM" isn't particularly winning -- and there is a whole field of academic study simply related to patient education.
I’m calling bullshit on the scenario where a distressed family member who is low-key claiming you’re mishandling the patient is suddenly calmed when confronted by a doctor reading a script filled with claims of empathy and care.
Reading the story really reveals the real issue - he’s an important doctor guy who is too busy to help a couple of 70-somethings understand what he knows. I say that both in the cynical “what an asshole” sense and in a real sense - there’s a guy having a stroke 4 bays over. That’s a sort of paradox of medicine.
I think the canned script will be handed to a less important PA or NP who will use it as a conversation guide.
It only sounds idiotic with sufficient levels of cynicism coupled with a lack of empathy.
Two years ago I rushed to the ER suffering from pulmonary edema caused by misdiagnosed endocarditis which required emergency open-heart surgery. The most qualified of the surgeons were very terse and very busy with, you know, prying people’s rib cages open and such. Do the math and think about what else the ER could have been dealing with at that time…
Alright, I'm going to miss the point entirely here, but I'm swinging anyway: if I recall correctly, Kumar had basically studied and become a doctor, but was scared to take the final test due to his father's pressure, yada yada, something like that, and it wasn't through sheer luck that he saved the patient.
Yes, well summarized. I understood this too, and replied in another comment: doctors many times come across as callous and almost robotic, but it's because they have to be ready to take on another case. The people who are sensitive don't stay for long, or make it all the way.
On one hand, I feel many developed that way as a means of adjusting to their profession. I'm not sure they are necessarily the happiest versions of themselves if they are like that. What I mean is: if they could choose to be more empathic without suffering, would they? I'm thinking they would, as it's a human thing to feel more. Reducing the suffering is another point to solve. Essentially, I don't know how comforting AIs would be as opposed to another human.
In that case, how can we make robots/AI do the dirty work, and have humans do the more empathic work? Maybe have separate doctors that just do the communicating part, while doctors plus specialty AI robots (in the short-term transition, until it's fully autonomous robots?) do the surgeries, etc., that require steady, stable minds and lower empathy.
I'd bet money that the Medium article has also been, in part, rewritten by ChatGPT.
Here's why: the story being told is that of a doctor, needing to communicate, being able to get through to 'regular folks' through talking down to them using ChatGPT (which is actually well suited to that task, once you acknowledge that hallucination can also mean losing stuff in translation).
Then, the article itself, including the description of the loving family's reaction, is VERY characteristic of the way ChatGPT 'talks down'. It's hallucinatory happytalk that paints with a very broad brush, and delivers not the tone you want, but an abstracted 'popularized' tone likely to convince large numbers of stupid and gullible people.
That tone's hard to miss, and it's popping out all over the article.
Points for committing to the bit. However, what's pictured is clearly a level of Hell, or possibly an out-take from Idiocracy. You won't believe what happened next! :D
"...hallucinatory happytalk that paints with a very broad brush, and delivers not the tone you want, but an abstracted 'popularized' tone likely to convince large numbers of stupid and gullible people."
Only because I've written many words and had very specific concepts, 'essence', to represent with words. What I'm saying is not 'popularized', it's a set of very specific points.
You could GPT-ize what I said into 'feel-good talk' or 'hallmark cards' but you would lose the concept of 'the very broad brush does not deliver the tone you want, but a generalized one that will reach more people through being already based in how they see things'.
If you need to get a mass audience to see a new thing, ChatGPT is NOT your friend… it will tell them what they already know… the wild thing is how the original claim is just this, 'ChatGPT got an audience to see a new thing', but the ChatGPT-style payoff of 'they smiled bovinely and accepted the new information' is clearly what the ARTICLE AUTHOR wanted to hear, and thus it's a really suspicious take.
A quick way I've found to recognize ChatGPT-written text: upon reading it aloud, you notice how constant and monotone it sounds, no matter the chosen style. It has a very "metronomic" rhythm, almost like a krautrock song. It's kinda obvious once you notice it.
It is funny that you mention that. I've been feeling increasingly self-conscious about my own writing on the internet now because that's how I naturally write and talk, so I wonder how many people in the future are going to think it's just another AI comment written by ChatGPT.
I am not convinced, largely because ChatGPT has to have been trained on exactly this kind of pre-existing hallucinatory happy talk.
It has always struck me as not coincidental that it writes, long-form, like a college admissions essay, apologises for massive error breezily like a tech support person, and has the confident certain tone of a Californian software developer.
There were, after all, these humans involved its development.
Yes, the tone of voice that comes from ChatGPT reminds me of when you average a whole lot of colours and to just get gray. Or those "studies" of "here's a composite of every face in the world" that's just a pretty generic round featureless face.
I'm having a hard time buying this anecdote. Not even the bit where it's implied that multiple people read the same script multiple times and that this worked for soothing people, but the actual content of the before and after: the GPT version doesn't explain it any differently, and the 'compassionate' writing was absurd.
My most charitable interpretation is that the anecdote is true but the article itself was written by someone else and is more of an 'inspired by real events' situation.
Side note; does anyone actually think that chat gpt succeeds at rewriting things 'compassionately' or 'more intelligently'? From what I have seen the actual output more resembles caricature than the requested results, in the same way that corporate communications frequently parody earnest communication.
I’m really not trying to be inflammatory when I say this, but highly technical professions that have intense training requirements for entry attract a disproportionate number of people with social development issues, ranging from intense self-diagnosed “introversion” to full-blown developmental disorders like autism/Asperger’s. There’s plenty of this in software development, but medicine takes it to the next level. The demands of the curriculum and placement training basically ensure that a majority of the successful entrants to the profession are highly anti-social, as anybody with an interest in having a healthy social life would almost have to give it up to make it through that selection process. My thesis is that people with these personality traits legitimately find the synthesised “compassion” or “empathy” produced by LLMs to be insightful.
I chuckled at this because I can confirm it to be true among my acquaintances, and theirs, who are doctors.
I hate to stereotype, and generally they're all decent people. But almost all of them have an undercurrent of arrogance that permeates through despite their best attempt to keep it in check. It's pretty funny.
I've been in situations where reviewing a selection of sentiments as presented by Hallmark cards could have improved my response, so I can imagine something like this happening.
But this article specifically convinces me that it was not written by someone with those experiences.
Also, I've got a friend who started med school in their thirties, hearing about it from him I do not think your supposition is particularly inflammatory. I went through an engineering program which was time consuming and generally considered challenging and I still wonder at what sort of internal alchemy keeps proto-doctors in the program and not doing literally anything else with their time.
There is a kernel of truth to this, disproportionate sure, but I think majority is a bit over the top. There are plenty in medicine aware of this, antisocial behavior and toxicity in general, there’s a lot of crying that goes on from medical school throughout training. You can easily have a rich social life throughout training but the process doesn’t effectively filter out the trainwrecks either.
> You can easily have a rich social life throughout training
This is the complete opposite of my experience, and I’d be very surprised if there were any med schools where this was legitimately possible. When my wife was in med school she had about 40 hours of course work per week, and the curriculum was so dense that she spent nearly all the rest of her time studying. She worked (a job) 10 hours a week during med school and we’d be lucky to have a couple of hours of dedicated time together per week. Most of the time we spent together was when I’d help her with revising. During semester breaks she had mandatory placements to complete, and her school operated classes on all public holidays. It only eased for a couple of months between academic years (when she also had a non-trivial study workload to get through). During her residency this got worse when she had irregular 18+ hour shifts to get through, and just wanted to spend most of her time at home sleeping. This was the same experience that all of her peers had, so either you or my wife must have had an atypical academic experience, or you have a substantially different definition of “rich social life” than I do.
> There are plenty in medicine aware of this, antisocial behavior and toxicity in general
To be clear I’m not referring to toxicity necessarily, or antisocial as in being destructive or disruptive. I mean averse to social settings, and not capable of or interested in participating in normal pro social behaviour. Nearly all of her peers at med school had these characteristics, if somebody told me that 90% of doctors had some form of autism, I really wouldn’t have a hard time believing it. This is very similar to the experience I had in all of my maths, physics and comp sci papers at college, and to my subsequent experience working in software development. Granted it’s not quite as severe in that setting, but the poorly socialised software engineer stereotype didn’t just materialise out of thin air.
People do find these highly contrived LLM outputs to be insightful for their “perspectives” on compassion or empathy, which is totally unrelatable for me, but this just seems like a reasonably plausible explanation.
> This is the complete opposite of my experience, and I’d be very surprised if there were any med schools where this was legitimately possible.
Well then your experience is limited.
Most people don’t work a job in medical school, so that’s time right there. Also, the format of US medical school has changed considerably in the last 20 years. There is plenty of free time in the preclinical years; very few attend lecture (preferring to use a number of commercial review/prep products), and there are usually only a few hours a week of mandatory team-based learning groups and some hours of labs. This format is overwhelmingly common now in the US. Time management is also a large factor; some are better prepared to deal with the gauntlet.
Clinical years will have periods of intensity but usually for few week stretches, surgery clerkship might be intense, but psychiatry which is also at least 6 weeks will often be close to 9 to 5 with free weekends or even less. Time during residency training is extremely specialty and program dependent. Is your wife OB/GYN or general surgery, that’s very different than medicine, anesthesia or dermatology.
> Nearly all of her peers at med school had these characteristics, if somebody told me that 90% of doctors had some form of autism, I really wouldn’t have a hard time believing it.
Yeah whatever. There’s a lot of pricks and socially awkward people in medicine but a majority are primarily still interested in going into a service oriented industry working with people.
Just because some idiot with something to sell posts an apocryphal story about ChatGPT doesn’t prove your point.
Having worked in both tech and medicine your take is just selection bias and a limited view. Such a verbose response to say very little as well.
> Also the format of US medical school has changed considerably in the last 20 years. There is plenty of free time in the preclinical years
You're right about that, it's gotten worse. The perspective you're offering here contradicts basically all documented perspectives on the intensity and time demands of medical training.
> Concerns about excessive content and curricular overcrowding in medical education have existed for more than a century. In 1910, Abraham Flexner noted that the packed medical school curriculum would “tax the memory, not the intellect.”1 The Rappleye Commission Report on Medical Education published in 1932 stated that “the almost frantic attempts to put into the medical course teaching in all phases of scientific and medical knowledge, and the tenacity with which traditional features of teaching are retained have been responsible for great rigidity, overcrowding, and a lack of proper balance in the training.”2 Since then, periodic concerns have been raised about this ongoing problem, notably in the Association of American Medical Colleges' Panel on the General Professional Education of the Physician (GPEP report) in 1984 and the Assessing Change in Medical Education (ACME-TRI) report in 1993, among others.3-6
> The problem has only worsened in the years preceding the pandemic. New pharmaceutical agents seem to appear daily, while our understanding of pathophysiology of disease continues to expand. In addition, we have needed to consider new and vital subjects such as cultural competence, care of LGBTQ patients, teamwork and interprofessional care, health care quality and safety, medical humanities, narrative medicine, and even wellness curricula, that have added pressure to the preclerkship phase of medical school, as ever more content is added. And little has been removed.
The relationship between empathy and burn out (especially in clinical practice) is also very well established. The whole system has a very strong selection against empathetic practitioners, whether through burning them out, or selecting a preference for their less empathetic peers.
"Side note; does anyone actually think that chat gpt succeeds at rewriting things 'compassionately' or 'more intelligently'?"
Often, sure.
A few days ago I asked ChatGPT about a situation that needed compassion and wisdom. I thought it did way better than the 5 or 6 people I had talked to about it.
FYI, we're going to give the dog another chance.... but I was glad ChatGPT didn't push for that. I think ChatGPT answered it pretty close to how someone well trained in both mental health therapy and dealing with dogs with severe behavior issues might handle it.
Good luck with the dog, and thank you for sharing that. I agree that was a reasonable response, but I may be biased because I recently had a conversation along these lines and it hit most of the same beats. It is certainly free of the defects I mentioned. Thank you so much for the counterexample; it's perfect.
I wonder if it is significant that you didn't ask it for a specific kind of response?
Likewise, thanks for scoring a point for humanity. :) It's rare to get someone on the internet to appreciate a counterpoint.
Yes, it probably helps that I didn't ask it for advice, but rather just kind of talked to it as a sounding board. I guess I'm weird that way, in that I feel it is important to talk to an AI like it is human and has feelings (saying everything from "please" to "wow, that was amazing, thanks"), even if only for myself.
But I've become convinced it gets better responses. Which isn't altogether surprising, if you think about how LLMs work.
I just read the GPT snippet in the response link you gave, and while it superficially seems well written, the text is hardly what I'd call moving or good. Instead it tries just hard enough at covering all the formulaic points of empathy and sympathy to emerge without a human spark of either. Essentially, it reads more like a personalized distillation of a generic committee-made sympathy note.
If this is today's idea of a decent compassionate note, it's a sadly bland example of what some people consider meaningful.
Well, I guess the difference is that it didn't write it for you, it wrote it for me. And it was exactly what I needed to hear. And exactly not what the humans I talked to about the situation were providing, who tended to be all about "here's how I fixed the (supposedly) similar situation with my own dog" (which is unhelpful, given that I've had other dogs and this one is very, very different). For whatever reason, I guess they didn't think it was important to say that it is a heartbreaking situation; they didn't say "It's a tough situation, but your compassion and consideration for your dog's well-being is commendable." They didn't say "As much as we'd like to, we can't change the innate behaviors and instincts of certain breeds." They just skipped that part, like it didn't need to be said.
But that's what I needed to hear. I didn't need whatever "human spark" that you imagine (but that I personally doubt you can actually demonstrate).
So on that note, why don't you show me what you'd write, if you were trying to be helpful to someone in that situation? I'm genuinely interested in seeing if you can write something that isn't formulaic, that has that human spark you speak of, and that is otherwise better than the response that a machine gave me.
I get it if that's too much time and effort to do, but really, it shouldn't take a lot of time at all. At least, not if you are suggesting that a doctor in an ER should take a break from saving lives to do such a thing. So please, just write a paragraph or two that shows what you'd say if a friend came to you looking for empathy in such a situation.
"it's a sadly bland example of what some people consider meaningful."
Again, let's hear what you think is better, the ball is in your court. As it stands, based solely on my interaction with you as well as my above interaction with ChatGPT, it looks like machines do empathy better than humans. But I am open minded to the idea that you can actually do better than the machine. Let's hear it.
Just thought I'd post a quick reminder, I'd be very interested in seeing what you mean by the ChatGPT example being a "sadly bland example of what some people consider meaningful." I don't understand what that means, because you haven't shown me what kind of response you think would be better.
Would you care to write one? Feel free to do one quickly, such as something a busy ER doctor would be able to put together. Otherwise, though, it's very hard to accept the idea that ChatGPT did such a poor job without something to compare it to.
> does anyone actually think that chat gpt succeeds at rewriting things 'compassionately' or 'more intelligently'?
I've seen non-native English speakers (co-worker) use it to write lengthy apologies to customers, and I (native speaker) think it sounds over the top sincere and dramatic, especially when the same person tends to talk in a completely different manner.
I believe this story, because the customer in this case was satisfied too
"and I (native speaker) think it sounds over the top sincere and dramatic, especially when the same person tends to talk in a completely different manner."
Of course, you can actually tell ChatGPT to change its style and mannerisms. Here's an example from when I was experimenting with this. It went a bit over the top in the other direction, but only because I was so explicit about asking it to be casual.
It's even pretty good at doing this multi-lingually. I live in Indonesia, and there is a sort of dialect used by various people from Jakarta that basically combines Indonesian and English in a way that is quite recognizable when you hear it.
The other day I asked ChatGPT to answer a question I had in Indonesian, and the answer was very formal (Google Translate generally has the same problem – the translations it gives you are way too formal for most speech). So I asked it to rephrase in this Jakartan slang and it did very well.
No, but if the AI is good enough, you can say things like "write in a non-formal style" and it will do just that. Whether or not ChatGPT can do this well 6 months from its launch isn't the point.
Right now ChatGPT tends to write formally and cautiously, for rather obvious reasons. Better to err on that side than the other. With a combination of thoughtful prompting, and AIs getting better over time, most of the arguments against using them in such situations fade.
Remember, the original article is not talking about different languages, it is simply about a doctor using it to help out. Obviously the doctor can scan it prior to showing it to anyone.
> Side note; does anyone actually think that chat gpt succeeds at rewriting things 'compassionately' or 'more intelligently'?
Sure. There are some good examples in the replies already. For me, ChatGPT (specifically GPT-4, via API) does a better job than I do based on sheer breadth and command of vocabulary alone. I've used it a couple times to generate emotional texts and the results were quite good - though I'm not actually giving anyone the raw LLM output. The process always looked like:
1. Me describing in detail the kind of message I need, who the recipient is, and what the situational context is, and then asking ChatGPT to generate several variants.
2. LLM generating 2-3 variants, each of which is roughly 50% stellar, 50% meh.
3. Adding any missing details and trying again. And/or, if one of the variants looks particularly good/promising, asking the LLM to generate variations of that.
4. Taking the few best results, mixing them together, and blending with some of the text I independently wrote on my own.
In those few cases (and in more cases where I'd enlist GPT to write formal e-mails for me; the process is pretty much identical), the final outcome was a piece of text that's a sentence or phrase-level blend, with 50-60% of the phrases being AI-authored and the rest my own, and 100% of it edited and reviewed by me.
It still takes some work to write a good message this way, but I resort to it because the 50% of AI-sourced text I incorporate is stellar: all well-chosen words and phrases (many of which I'd kind of forgotten existed). It's something I could write completely on my own, but from my pre-GPT attempts I know that reaching this level would take me many hours of agonizing effort, marked by strong feelings of self-doubt.
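For what it's worth, here is a minimal sketch of that variant-generation loop, assuming the openai Python package (v1-style client) and an API key in the environment; the model name, prompt text, and variant count are placeholders, not anything from the original comment:

```python
# Hypothetical sketch of the "describe context, generate variants, refine" loop
# described above. Model name, prompt text, and variant count are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

context = (
    "I need a short, warm message to a colleague whose project was cancelled. "
    "Recipient: a peer I work with weekly. Tone: sincere, not corporate."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": context + " Write the message."}],
    n=3,              # ask for several variants in one call
    temperature=0.9,  # higher temperature gives more varied phrasing
)

# Review the variants, then blend the best phrases with your own text by hand.
for i, choice in enumerate(response.choices, 1):
    print(f"--- variant {i} ---\n{choice.message.content}\n")
```

The final blending and review step stays manual, which is the point of the workflow above.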
> Side note; does anyone actually think that chat gpt succeeds at rewriting things 'compassionately' or 'more intelligently'?
Yes, I use it for this purpose all the time. I could have done better myself if I invested time in writing it, but it allows me to get a good enough result in a couple of seconds.
> Satisfying HIPAA rules around patient privacy alone may take many years, or decades, to resolve themselves before we could even contemplate directly using programs like ChatGPT in a medical theater.
This is easily solved by not using someone else's API.
Here's $500 worth of hardware you can get on ebay running Vicuna-13B, locally:
> IV fluids may not be the best treatment for someone with severe pulmonary edema and respiratory distress because it could make their condition worse. This is because when a person has severe pulmonary edema, their lungs are already filled with fluid, which makes it harder for them to breathe. Adding more fluid to their body could increase the pressure in their lungs and make it even harder for them to breathe. This could be life-threatening.
> Instead, the best course of action would be to focus on treating the underlying cause of the pulmonary edema, such as heart failure or a lung infection. This may involve giving medications to help the heart pump more effectively or to reduce inflammation in the lungs. The patient may also need oxygen therapy to help them breathe more easily.
> It's important to remember that every case is unique, and the best course of treatment will depend on the individual patient's condition. The healthcare team will do their best to provide the most appropriate care for the patient based on their specific needs.
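To make the "run it locally" suggestion concrete, here is a minimal sketch assuming the llama-cpp-python bindings and a quantized Vicuna-13B checkpoint on disk; the file name, prompt, and settings are illustrative placeholders, not what the commenter actually ran:

```python
# Hypothetical sketch: local inference with a quantized Vicuna-13B checkpoint
# via llama-cpp-python. Model path and prompt are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="vicuna-13b.Q4_K_M.gguf",  # placeholder path to a quantized checkpoint
    n_ctx=2048,                           # context window
    n_gpu_layers=0,                       # 0 = CPU only; raise it to offload layers to a GPU
)

out = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": "Explain, in plain and compassionate language, why IV fluids "
                   "may not be appropriate for a patient with severe pulmonary edema.",
    }],
    max_tokens=256,
    temperature=0.7,
)

print(out["choices"][0]["message"]["content"])
```

Nothing leaves the machine, which is the whole point relative to a hosted API.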
Yes, but this is adding in treatments not listed in the original prompt. There was nothing mentioned about treating heart failure or reducing inflammation. Vicuna is extrapolating here, not simplifying and making the message more compassionate. It is an interesting application of local inference, though.
I run a similar Vicuna bot on a free ARM VPS from Oracle. Inference takes ~20 seconds for ~200 tokens on the CPU, so it should stream results about half as fast as ChatGPT.
...however, that was on cheapo ARM hardware. On a hospital budget I bet you could beat OpenAI with a local model pre-mapped in GPU memory on a 3090.
Not OP, and I'd have to see the prompt to run it on my setup, but as a reference point: I put together a refurbished HP Chromebox with some extra RAM and an SSD for a total of $250, and it'll run all the 13B models about as fast as I can type on my phone. I'm consistently impressed by it even if it's not blazing fast.
ML models in general are not great when tasked with rigid stuff. Stable diffusion can make a cool picture, but getting it to make exactly the picture you had in mind is mostly impossible.
This is actually a great use case in the sense that the requirements are flexible, it just has to be something empathetic and directionally related. That bar is so much lower than a trustworthy diagnosis or even a document lookup, either of which can be right or wrong. When what you need is "an example of x", this technology is great.
> Kind of fascinating that the benefit here is not so much artificial intelligence as artificial empathy.
In their book "The Elephant in the Brain", Kevin Simler and Robin Hanson argue that a large subset of health care is already something like that: conspicuous caring with few benefits other than social signaling and the feeling that one is being cared for. [1]
See my sibling comment just posted; I don't believe that's relevant. Getting facts right is a high bar, and you can always screw up a fact, whereas a loosely constrained example of an empathetic speech is much easier. That holds despite the fact that medicine deals in facts (and it would hold for an even more black-and-white discipline), so that in principle a few examples are all you need to learn from, while empathy is much more subtle and would require a representation learned from many varied examples.
And experience is the practical contact with and observation of facts, over an extended period.
Most English dictionaries explicitly separate out “formal judgment by an expert” as a distinct meaning for “opinion” from the more common meaning of personal subjective no-facts-needed belief.
>> I printed this response up, and read it to the concerned family. As I recited ChatGPT’s words, their agitated expressions immediately melted into calm agreeability.
not bad for a corny $0.50 hardback novel - but wtf was that about? If a Doctor read that script about my mother in the ER, I'd be more insulted than if they had printed the source code of a Linux kernel module and proceeded to read it out in full, verbatim!
So what if one's a doctor or another's a lawyer, or even involved in the tech VC world? Artificial intelligence is an extremely complex and specialized field; plenty of learned professionals and VCs have been mesmerized and then duped by "altruistic" tech projects in the field of AI or crypto.
In the '80s and '90s, automation tech in call centers was supposed to revolutionize customer support, blah blah blah... it didn't work, people hate automation/robotic behavior, so the industry switched to outsourcing calls to human operators in countries with cheaper salaries - it worked - and that drove other outsourcing innovations.
I fear chat bots and generative AI will fare the same - they're barking up the wrong tree.
Yep, I think fundamentally it feels wrong. I think many families would see it as a sign of incompetence if the doctor cannot explain a diagnosis or treatment plan. Time pressure is unfortunately part of the job, and people may not process difficult information in stressful situations with much clarity; you have to allow for those things. Reading a script from ChatGPT is insulting.
Patients hate when doctors just google their symptoms / rash in front of them, this is quite a step further. Printing out supporting information for families / patients is ok, but it needs to be validated – I think this is quite a dangerous use of generative AI for both the doctor and the patient.
One of my favourite movies; we need more finance movies, considering the outsized direct/indirect impact the financial industry has on our lives + it hits closer to home too.
My takeaway isn't how useful ChatGPT is, but how useless the doctor is, and by extension, how poor a job the field of medicine does of training doctors to have better bedside manner.
Regardless of my state of agitation, I would most definitely not want to be read a script prepared by an AI. In fact, were I to find out after the fact, I'd be even more incensed.
I passed this article & discussion on to physician spouse, who would specifically like me to reply to this comment: "You are being very emotional."
Digging deeper (as the comment above alone isn't worthy of HN, IMO): "People believe what they want to believe because of small sample sizes. When there isn't a black & white answer people go back to their personal experience, which is limited. They will not be right in the way you want to be right if you're practicing medicine." The family members are certainly exhibiting this. Are any of us, in this discussion?
Additional question: "Who is the patient? Is it the anxious family members who are being treated, or the person with fluid in the lungs?" The person with fluid in the lungs is the patient and the doc needs to be spending time on that person (the one who will die if not treated correctly). The family members are important but not in need of medical care. They can be handed off to an AI in the short term.
From my own vantage point, I've observed that the context switching between "caring for patient" and "dealing with family" can put a substantial drag on cognitive function for physicians. They do need to suck it up and deal with it since family members are the primary way in which patients receive post-release care in the US due to the non-existent safety net, and family member presence can really improve outcomes, but I'm not surprised that ChatGPT helps in producing the sort of bland prose that is well-received by family members.
This sort of response certainly seems indicative of a doctor with horrifically bad bedside manner. Families tend to be emotional when the health of their loved ones is at stake. I'm skeptical that a chatbot would help someone who can't grasp that.
The response is pretty clear and 100% correct: family is important but not in need of medical care. Making doctors focus too much attention on the needs of third parties will worsen the outcomes for actual patients.
Doctors are, on average, quite empathetic people. But humans in general are what they are, and the politics and economics of medicine prevent the doctors from saying the right thing[0] to overbearing or emotionally distraught third parties, so I can easily see how inserting a chatbot in between would improve the outcomes of actual patients.
(Though I am surprised there isn't dedicated staff for that yet. Or does it not work when people know they're talking to a PR doctor instead of the one treating their loved one?)
There is something comforting about how ChatGPT never becomes overly emotional. I'm not surprised it outperforms humans in bedside manner except for occasions where the human is particularly adept.
Will there be a Dr House (or whoever the more current TV doctor is) video version the family can interact with, which would make them feel like they are getting the best care in the world from a famous doctor?
But also, if ChatGPT has a 50% chance of giving the wrong diagnosis, can you trust it more than 50% to give the correct reasoning to the family as to the activity of the medical staff?
"Trust but verify". If you can't verify, then don't trust.
I've used ChatGPT for short scripts. I can read and test-run scripts to verify their correctness. On the other hand, asking it to do more complex programming is error-prone.
I agree. I've never liked the phrase either. Even so, I'll take "Trust but verify" as a shorthand for "Accept the output of a somewhat untrustworthy person/process, but verify the output before making use of it."
Sometimes a short, inaccurate phrase wins out over the longer, accurate version.
There are bad and good uses of that phrase. I like the reading that goes "trust the intentions and skills of the other person, but recognize they're only human and can make mistakes, so if the thing is high-stakes, double-check anyway".
> In each case, the output from the hungover intern/ChatGPT needs to be carefully checked before it’s used. But in these scenarios, reviewing existing work is usually much faster than starting from scratch.
There are different figures, partly as a result of how this question is studied. The Mayo Clinic did a famous study on people who sought a second opinion, and they found that something like 12% of people seeking that second opinion had been correctly diagnosed the first time. Obviously this isn't a fair sample, since these are not only people seeking a second opinion, but one from a prestigious source.
Isn't seeking a second opinion something you usually do when the first opinion sounds fishy, doesn't seem to match the evidence, or when the doctor giving it doesn't seem to be taking the job seriously (e.g. dismissive, preoccupied, etc.)? Or in cases known to be difficult, which makes you think multiple diagnoses are needed?
My experience is second-hand, but they seem to be pretty split between seeing exciting rare diagnoses everywhere (students, juniors primarily, or one's own particular research interest) and biased against them, it's ~never that, it's the same boring thing we saw ten of yesterday (jaded specialists primarily).
Physician here with ER experience (and IT too). I think it's quite realistic, except the relatives being in the ER room, giving orders to nurses. But that might be different in other hospitals and countries.
The reason for the edema is not given; perhaps cardiac insufficiency, perhaps she didn't take her pills (it is mentioned that she had dementia). Anyway, it's possible that treatment had already been given but would take a few hours to fully set in, with the relatives impatiently waiting and seeing no improvement.
Explaining medical things to patients and relatives is difficult and a subject of its own in med school. In extraordinary circumstances, and without being able to do much on their own, people filter what they are told down to what they want to hear. Sometimes you get feedback on what the patient understood; it's surprising, and also the potential basis for a lawsuit. There is a reason you have to document so many things in writing. So I think it's a smart idea to use AI to generate simple-language messages that are easily understood.
It doesn’t matter if the medical content of the text is 100% correct, the message here is: ”Calm down, stand back, I handle this”.
I agree it's a realistic scenario. I always thought one of the most important things about being a junior doctor (FY1/2 in the UK) was being able to have these conversations with next of kin in a way that is empathetic and easy for them to follow. If they are not processing the information then you have to allow for that, but I do think some relatives would react very badly to being given a printout from an AI to explain a situation. Patients always bring it up if a doctor has googled their symptoms / rash in front of them, and this is giving much more authority to the AI in a more sensitive and potentially difficult situation.
I think the way AI is used in medicine has to be thought about very deeply, I don't think it should just be a fallback in time-pressured or difficult scenarios.
+1. Wife is an ER doc. She also agrees with the premise that communicating medical facts to the patient and family is burdensome and time consuming, especially when there's pressure on the staff to see a high volume of patients. She has her own anecdotes similar to the story. There need to be improvements in communication, though she hasn't bought into ChatGPT being the right way to do this.
Theoretically something like BioMedLM and/or a vector search over clinical research and/or GPT-4 with cited sources etc. could be useful in some diagnostic circumstances to at least give doctors ideas, right?
What he is talking about here is just an incredibly common scenario that the doctor and everyone is already very familiar with. Which is basically delaying the inevitable for people who are in some ways already gone.
Also, my understanding is that the ChatGPT API does have some level of privacy now, although maybe not compliant with the privacy laws, which seem somewhat burdensome on the entire medical industry (overcompensating in some detrimental ways).
It seems like the fundamental problem is aging. Maybe a leading edge LLM approach can help keep people up to speed on aging research. Although probably that's pushing it because you would need to get very deep into mitochondrial damage or whatever.
> Theoretically something like BioMedLM and/or a vector search over clinical research and/or GPT-4 with cited sources etc. could be useful in some diagnostic circumstances to at least give doctors ideas, right?
Anything is possible, but consider this doctor's review of ChatGPT's diagnostic output:
> ChatGPT works fairly well as a diagnostic assistant — but only if you feed it perfect information, and the actual patient has a classic presentation, which is rarely the case. (And you don’t mind a 50% success rate that often misses life-threatening conditions.)
For any new technology to be useful, it needs to do more than just work; it must be significantly more productive than the alternative.
That review is one anecdote which does not specify the ChatGPT model used (which vary widely in performance), is not BioMedLM, is not a vector search, etc.
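To make the "vector search over clinical research" idea mentioned above concrete, here is a minimal sketch assuming the sentence-transformers and faiss-cpu packages; the toy corpus, model choice, and query are illustrative placeholders, not real clinical data or anything from the thread:

```python
# Hypothetical sketch of a vector search over a (toy) corpus of clinical
# abstracts. Corpus, embedding model, and query are placeholders.
import faiss
from sentence_transformers import SentenceTransformer

abstracts = [
    "Aggressive IV fluid administration can worsen pulmonary edema in acute heart failure.",
    "Diuretics and oxygen therapy are first-line measures for acute cardiogenic pulmonary edema.",
    "Broad-spectrum antibiotics are indicated for community-acquired pneumonia with sepsis.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(abstracts, normalize_embeddings=True)

index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product == cosine on normalized vectors
index.add(embeddings)

query = model.encode(
    ["elderly patient with pulmonary edema: should IV fluids be given?"],
    normalize_embeddings=True,
)
scores, ids = index.search(query, 2)  # top-2 nearest abstracts

for score, idx in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {abstracts[idx]}")
```

The retrieved passages (with their citations) could then be handed to an LLM as context, rather than asking the model to answer from memory.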
Is there a name for this 5-paragraph response format that feels so intrinsic to chatGPT? (5 paragraphs: general empathy, specific empathy, technical explanation, establishing trust, reassurance)
Is the style a result of the default rules that OpenAI is using? Or is it derived automatically from the material chatGPT was trained on?
Yes, it's called the five-paragraph essay [0] :) Or at least, it's a variant of it, with more specific constraints on some of the paragraphs (e.g. "establishing empathy" in the introduction).
The point here is that he is using chatGPT /LLMs as an assistant - not making a diagnosis or something. He easily could have come up with a similar script from working with another member of hospital staff (maybe a social worker??) but that takes time.
What I want (in a business context) is an LLM that can rewrite memos / docs (I am an Amazonian) in exactly the style that someone expects, trained on other successful docs. It's not to cut out the hard work of getting the details right - it's to cut down on the bullshit of misalignment from bad communication.
Could the explanation be simply that, when the doctor came back with the piece of paper, the family thought it was from a textbook, or a second-opinion from another doctor that had been printed out? I.e. that it carried more weight because of that.
> ...only if you feed it perfect information, and the actual patient has a classic presentation, which is rarely the case. (And you don’t mind a 50% success rate that often misses life-threatening conditions.)
You should try asking it detailed questions about medicine. It's fairly deep and broad in its medical "knowledge." But it's not good at deductive or inductive reasoning and has no agency, so it's entirely unsurprising that it's not good at differential diagnosis. We actually have good differential-diagnosis expert systems; the challenge is getting the providers to input queries properly. I can imagine GPT-4 acting as an intermediary between human natural language and expert systems to great effect. It seems rather rash to be judging these technologies based off a crappy web UI thrown on a chatbot a few months ago.
Providers don't input queries "properly" because differential diagnosis software is complicated, clunky, and not easy to use for the overworked clinician.
Given that "The Pile" includes PubMed and NIH as data sources it would be unlikely to have GPT4 not use them at all. Even GPT3 uses Wikipedia which does have (mostly) factual data with cited sources.
> Even GPT3 uses Wikipedia which does have (mostly) factual data with cited sources.
There's a LOT of stuff on Wikipedia where the source is a link to some random, long article and it's unclear where exactly the referred to information is coming from. Gets significantly worse for any "hot" topic.
It's lossy though, it tries to remember everything and relate that to everything else. Performance would increase significantly if it were tuned for that use case.
Then you haven't had much experience with doctors… you go in and you're most likely getting either a cast, a broad-spectrum antibiotic, or an antidepressant. The only time you reliably get a specific diagnosis is when somebody finds a tumor.
In the last year I had to get a full antiparasite treatment course because a doctor misdiagnosed my partner's psoriasis… that was fun.
True, 90% of doctors' diagnoses are garbage and maybe even worse than ChatGPT's. The 10% is for when you have a visible problem that can be cured with surgery; they are pretty good at that.
A doctor with GPT-4 could be way better than one without.
They already have diagnosis flowcharts where it's a game of follow-the-symptom. It would have to be considerably better than the tools they already have to be appropriate.
You are just in wishful denial about the technological disruption: ChatGPT will rapidly and dramatically increase productivity by displacing human workers.
ChatGPT will automate 99.9% of what's on the Internet, which is blather, displacing what most human labor is spent on these days and leaving humans with no demand for their labor except to go back to producing accurate, valuable knowledge.
The halcyon days are over. Post-truth technology has followed the typical maturation cycle, and now is at the point that we can automate it. (HN is dead. This post could be written by ChatGPT or a future LLM.)
The first: with his doctor hat on, he should understand that the value is the piece of paper and reading from it consistently to relatives who may have mild cognitive impairment.
ChatGPT wrote essentially identical content to the words he'd been delivering himself.
The professionally responsible thing to do would have been to type up or get someone to type up his _actual words_, print those out, and sign his name to them.
He is -- for marketing hype cycle reasons -- projecting magic onto AI and ChatGPT when anyone who has ever known anyone close with mild cognitive impairment knows that the consistent, calmly repeated message is the magic.
This is really irresponsible at worst, and empty, AI-hopey-changey nonsense at best.
The second: everyone here suggesting that this is a valid or appropriate or even appealing use of AI.
Look: if you can't do empathy (and not all of us can), hire an actual person who can do empathy! Make that their job, support them in it, celebrate their ability to do it and reward them properly. They are worthwhile people who do important jobs.
Don't give the job to an AI to produce a fake veneer of empathy.
If you can't understand why faking and automating empathy is not a substitute for actually trying to find real empathy in a role where someone could be an effective emotional bridge between two worlds, and if you can't see that this kind of bland/BS/automated sincerity application of AI is really toxic, I don't know what to say to you.
the flip side of this is that the model acts as a universal translator. not just for language but also for emotional valence and context.
are you a doctor? worked in an er? there are a lot of strong words about what doctors should and shouldn’t do, with regard to empathy and compassion. there is a shocking lack of compassion or empathy in the very words demanding them.
(there is also a rather large assumption that the doctor would individually be able to hire someone on a whim, to talk nice to people. in an er?)
the value of the paper, of the words, is that it helped bridge a recognized communication divide. the purpose? a combination of compassion and treatment. help the well-meaning elderly children understand well enough to keep them from indirectly killing their mother.
No, I'm not. My dad -- who died with dementia -- was in medicine for his entire life. I am writing about what people with cognitive impairment tend to need from the perspective of reassuring, daily, a highly intelligent person who couldn't remember stuff.
> there are a lot of strong words about what doctors should and shouldn’t do, with regard to empathy and compassion. there is a shocking lack of compassion or empathy in the very words demanding them.
The bit about hiring someone with empathy was quite clearly in my second point, which was aimed at the HN audience, not at the doctor.
(Though the guy who wrote the article isn't writing as a doctor in some medical publication; he is writing as an executive at a VC firm that backs AI startups, as other commenters have observed.)
I have a lot of compassion for doctors. I don't have a lot of compassion or empathy for motivated reasoning in AI marketing bullshit written by well-remunerated people who are trying to force a technology into healthcare where it does not belong.
And yes, the value of the piece of paper is that it bridged the divide. That's exactly what I said. It does it through consistency and constancy. ChatGPT has _no_ magical impact on that.
> The bit about hiring someone with empathy was quite clearly in my second point, which was aimed at the HN audience, not at the doctor.
then the assumption of ability to hire or impact hiring is broader.
the article author is quite biased, yes. part of the problem with current “ai” - bias from all sides.
chatgpt is not magical. it is a tool. a novel one. that many wish to sell. none of that lessens the possibilities it presents. nor the risks. but there are real, beneficial use cases. with a little time spent.
not everyone wishes to put greed ahead of linear algebra, or math before compassion.
> then the assumption of ability to hire or impact hiring is broader.
It is, but so what?
What I am suggesting is that using AI tools to fake empathy is worse than not being able to do empathy yourself. This should be evident.
If you have a task that requires significant empathy (a patient advocate, a customer advocate, a serious complaints line, anything with extensive exposure to the emotions of your customer) and you feel unable to do that (not everyone can), the ethical, responsible thing to do is to hire someone who does that job properly.
Professionalise caring. Don't just give the role to an AI that can fake it with a simulacrum of past human caring that it read on the (largely corporate) internet.
We are heading down a dark path if we pretend that automated fake empathy is acceptable.
Edit: and yes, people fake empathy all the time, they copy and paste it. Of course they do. But each time someone fakes empathy there is a glimmer of a chance that they will realise that it's a bad, inappropriate thing to do. There's a glimmer of a chance that they go back to their boss and say "you know, I feel bad brushing them off; what they are going through is real and awful and we should address this more personally".
AI empathy will never do that. And we should feel very, very nervous about using AI to deliver highly bespoke fake empathy to eliminate any guilt from that.
> What I am suggesting is that using AI tools to fake empathy is worse than not being able to do empathy yourself. This should be evident.
you miss the grain of truth in the advertisement.
in a high stress situation, someone may know how to describe the necessary words but be incapable of accessing them. this is not for lack of empathy. nor are the generated words some form of fake empathy.
is the right choice to read the output like a script to an audience? maybe not. but reading it to oneself could do wonders.
there are a lot of new possibilities, as long as the tool isn’t treated as magic or abused.
He’s not using it to treat anyone. He’s using it for generating text that then is (weirdly) read to patients.
These ads for chatgpt have gotten incredibly hilarious lately. But it's an improvement over the initial "chatgpt saved my dog's life" type of nonsense.
> But in these scenarios, reviewing existing work is usually much faster than starting from scratch.
This is true, but it’s also a different skill than writing is. There are people who write well enough but are terrible editors. I fear for them in the generative AI workplace.
I don't know if it's fake, but I would not bet on it. There is imho a large crisis of trust in all types of institutions, professions etc. Being a doctor has been demystified to an extent, for various reasons (e.g. the realization I've heard from some people that doctors didn't really know what was wrong and they just went from one guess to another, large amount of information being available giving people the wrong impression they know more than they do etc. etc.). It's also my personal explanation for the rise in alternative medicine, homeopathy, ivermectin treatments for covid, but also more broadly various forms of conspiracy theories. It's not far fetched that some people would trust "AI" more than a doctor who they don't trust or respect at all anymore.
So fake or not, it definitely hits a nerve as something that is possible. On a tangent, I think this is actually quite interesting. If I was a sociologist, I would be interested in the root cause of this as much as a scientist or an engineer is interested in a bridge falling down. But I mostly see lamenting that the world has gone crazy.
It is kind of counterintuitive that IV fluids are somehow bad for the lungs. You can't really blame the family, especially when the "bro" doctor decides it's time for ChatGPT.
As someone who knows several doctors very well (decades of friendship, weekly coffee or drinks), you are really overestimating the average doctor, outside of basically surgeons.
A doctor might have everything you describe. But they don't have the time, energy, or desire to spend much time discussing things with patients.
Similar to a car mechanic. They know a lot, and if it's their own car, or their father asking the question, they can probably walk you through rebuilding an engine over the phone. But try getting a real answer out of one during a visit.
The power of some of these AI bots is that you can query them to your heart's desire until you are really satisfied.
ChatGPT: "using empathy, tell the family why an iv is bad for someone with fluids in their lungs, keep it short and concise"
"I understand your concerns. It's tough to see a loved one struggling. When there's fluid in the lungs, adding more fluid to the body via IV can make the problem worse. It's like adding more water to a cup that's already overflowing. The body needs to remove excess fluid, not add more. Your loved one's healthcare team is making decisions to help, not harm. Please keep asking questions – it's important to understand the process. We're all here with the same goal: to ensure the best possible care for your family member."
If an ER doctor printed an AI-generated script and read it to me while I was in an extremely stressful situation I would punch him in the head. This is disgusting.
tl;dr: I use it for religious effect. The masses view AI-generated text as magic and accept it as authoritative over what I tell them, so I have ChatGPT rephrase what I was going to tell them and have it read "What the AI said". They totally buy it and leave me alone.
This is the most “that didn’t happen” story I’ve seen in a while. No way the concerned family responded positively to the doctor reading a script to them off a clipboard, forgot about it, and then responded positively to other nurses/doctors reading the same script to them again.
Yes, it is unfortunate and difficult, and I did not realize that reading a prepared script off a clipboard would have helped me get my point across better. I could only see it causing them to view me as stonewalling them, or causing them further confusion.
I may have to try this for myself though, I am not a doctor and I haven’t tried it before.
You've clearly never worked in sales. Sometimes you simply can't make progress and another person comes in saying the exact same thing and the person goes "oh ok, I understand that. I'll do it."
I can't wait to go to the doctor, they ask me my favorite TV show/movie/video game, and then I get my diagnosis explained to me within the context of that show/movie/game. I could ask GPT to "doctor up" the explanation, and ChatGPT gives me a little more technical/medical jargon. Or just straight cascades of analogy.
"Understandably concerned, they hovered around my staff and me, to the point where their constant barrage of requests was actually slowing down treatment. To compound the problem, they were absolutely certain what their mother needed."
This is diagnostics. Doctors do it and so do IT bods (and others) but our patients (computers) dying is a bit of a fiddle rather than a tragedy or catastrophe.
I would not dream of letting CGPT loose on "indications/contra-indications", which is how medics work (IT could learn a lesson or two here instead of the usual "magic" thinking).
CGPT has a learned data set and it is not perfect. It will spit out a series of words given an input. No one can really calculate the output given an input - it's non-deterministic. It does look like the output is quite intelligent.
I consider CGPT as a really fancy calculator with all the usual caveats: do check the output carefully.