So, if GPT-4 starts dispensing authoritative-sounding medical advice without being licensed to do so, is that medical malpractice on the part of OpenAI?
Also: with people likely confused about the degree of authority these programs have, and with the programs' ability to masquerade bullshit as truth, I wonder whether most people would be able to detect that what ChatGPT says is wrong before it harms them.
I don't think you can disclaim your way out of this, especially not if you state things authoritatively and end up causing harm. That makes a huge difference.
If they can add a disclaimer, they can also say 'Sorry, ChatGPT-4 is not a doctor; consult your doctor'.
This would of course harm those who don't have access to doctors, but even then you'd need to prove that the output does more good than harm before you'd be allowed to release it. Even the most innocuous medicine has to go through standardized testing before it reaches the public; without proven efficacy it may well be a net negative.
Great. But without a license to practice, a concept of truth, and a confidence score to go with the output, I don't think it should be making any such statements at all.
That line of reasoning would sort of support the claim that WebMD, MedlinePlus, and similar sites (the WebMD symptom checker, etc.) should simply not be accessible to the general population.