There are a lot of counterarguments I could bring up, but just off the top: just because people use LLMs as therapists, lawyers, doctors, or deities doesn't make LLMs such.
My personal beliefs (we should not rely on models for such things at this stage, let's not anthropomorphize, etc.) to one side, let me ask: do you think that if I used my friend Steve, who is not a lawyer but sounds very convincingly like one, to advise me on a legal dispute, that should be covered by attorney-client privilege?
Because even given the scenario that LLMs suddenly become reliable enough to verifiably carry out legal/medical/etc. services, to the point where they are actually accepted into day-to-day practice by actual professionals and the companies are willing to take on the financial risk of any malpractice arising from using their models in such areas (as part of enterprise offerings, for an extra fee of course), that still wouldn't and shouldn't mean that your run-of-the-mill private ChatGPT instance has the same privileges or protections that we afford to, e.g., patient data when handled digitally as part of medical practice. At best (again, I dislike anthropomorphizing models, but it is easier to talk about such a scenario this way), a hypothetical ChatGPT that provides 100% accurate legal information would be akin to a private person who just happens to know a lot about the law but never got accredited and does not carry the same responsibilities.
Again though, we are far from that hypothetical anyway, and "people" using LLMs that way does not change this fact. I know, unfortunately, there are people who are convinced that current-day LLMs have already attained Godhood and are merely biding their time; that doesn't become real either, just because they act according to their assumptions.
I really struggle to understand why current-day LLMs in such a scenario should be treated any differently from, e.g., PKM software or a cloud-hosted diary, rather than afforded the same legal protections (or lack thereof, depending on viewpoint, personal stance, and your local data privacy laws), nor do I see any cogent argument for it across this comment section.
You'll find that these laws privileging certain folks are shaped and controlled by the individuals who have already been granted such privilege, in order to discourage and limit competition. Not because it's good in any way for the client.
Protectionism hurts all of society to benefit a few.
Perhaps this is a language barrier, but I genuinely do not understand what is meant by this. Like, what does this have to do with protectionism? Who are the "folks" in this case? Honestly asking.
Doctors control who can be a doctor, what is required to be a doctor, what doctors can and can't do, and that people are forced to go to them for healthcare ... all to protect their personal income. Not to better healthcare. Not to expand access to healthcare. But precisely to make it cost more to get. They are hurting society to benefit themselves.
I don't know where to start, but I want to assure you, no matter where on this planet you live, medical doctors are generally not at fault for the high cost of care. Depending on which healthcare system we are talking about, the particulars may differ, but no, MDs are not interested in worsening patient care for their own benefit. That would be kind of difficult, considering the amount of uncompensated labor and stress compared to other, higher-paying occupations. Ask a trainee/resident/equivalent in your local healthcare system if you want details.
And people are "forced" to go to an MD for medical treatment in the same way they are "forced" to go to any other domain-specific expert: that is where the expertise and liability lie, because they have put in the time, training, and exams to ideally assure a specific level of care.
Incidentally, this has absolutely zero to do with LLMs and the fact that they are cloud-hosted software, not an entity, a being, or anything of the sort, and so shouldn't receive any special considerations beyond what we afford to cloud-hosted content. I couldn't find anything on patient data processing in that MF collection you linked, and since that was his area of work, it was purely US-centric. Medical care is, however, the purview of medical professionals outside the US as well, including in countries with far better patient outcomes. If there is an applicable argument, just quote it directly instead of linking a collection of clips.
To bring this back to the topic at hand: LLMs can be, and already are being, used in medical practice. Doctors neither prevented that, nor did it require a law change, because, as stated before, it is merely data being input and processed. There are EU MDR-certified apps for skin cancer, there are on-prem LLM solutions that adhere to existing patient privacy regulations, etc.
Basically, doctors do not stand in the way of LLM usage (they neither could, nor do they have the time), and even if they wanted to, LLM input and output is just data and gets treated accordingly.
I can represent myself in court, but I can't prescribe my own medication. If one does not go to the doctor to get those drugs, one will die, so yes: forced.
All you assured me of is that you didn't watch the video.