At the risk of sounding impolite or critical of your personal choices: this, right here, is the problem!
You don’t understand how medicine works, at any level.
Yet you turn to a machine for advice, and take it at face value.
I say these things confidently, because I understand medicine well enough to know not to seek my own answers. Recently I went to a doctor for a serious condition, and every notion I had was wrong. Provably wrong!
I see the same behaviour in junior developers who simply copy-paste whatever they see on StackOverflow, or whatever they got out of ChatGPT with a terrible prompt, no context, and no understanding on their part of whether the answer is suitable.
This is why I and many others still consider AIs mostly useless. The human in the loop is still the critical element. Replace the human with someone who thinks that powdered rhino horn will give them erections, and the utility of the AI drops to near zero. Worse, it can multiply bad tendencies and bad ideas.
I’m sure someone somewhere is asking DeepSeek how best to get endangered animal parts on the black market.
No. Where did you read that I take it at face value? I literally said that I expect Claude to give me "half-assed" medical guidance. I merely said that that is still better than having no clue for the next 2 hours while I wait on the phone with 35 people ahead of me, which is completely different from "taking medical advice at face value". It's not like I will let my wife drink bleach just because Claude told me to. But if it tells me that it's likely an ear infection, then at least I can discuss the possibility with the doctor.
Say I am curious about how TCM works. So what if an LLM hallucinates there? I am not writing papers on TCM or advising governments on TCM policy. I still follow the doctor's instructions at the end of the day.
For anything really critical I already double-check with professionals. As you said, the human in the loop is important. But needing a human in the loop does not make the tool useless.
You are letting perfect be the enemy of good. Half-assed tax advice with some hallucinations from an LLM is still useful, because it primes me with a basic mental model. When I later double-check the whole thing with a professional, I will already know what questions to ask and what directions to explore, which saves time and money compared to going in with a blank slate.
The other day I had Claude advise me on how to write a letter to a judge to fight a traffic fine. We discussed what arguments to make, how a judge would see things, and thus what I should plead for. The traffic fine is a few hundred euros: a significant amount, but barely an hour's worth of a real lawyer's fees. It makes absolutely no sense to hire a real lawyer here. If this fails, the worst that can happen is that I won't get my traffic fine reimbursed.
There is absolutely nothing wrong with using LLMs when you know their limits and how to mitigate them.
So what if every notion you learned about medicine from LLMs is wrong? You learn why they're wrong, then next time you prompt and double-check better, until you learn how to use the tool for that field in the least hallucination-prone way. Your experience also doesn't match mine: the advice I get usually contains useful elements that I then discuss with doctors. Plus, doctors can make mistakes too, and they can fail to consider things. Twitter is full of stories about doctors who failed to diagnose something that ChatGPT got right.
Stop letting perfect be the enemy of good. Occasionally needing a human in the loop is completely fine.
To be fair though, humanity doesn't know how some medicines work at a fundamental level either. The mechanism of action for Tylenol, lithium, and metformin, among others, isn't fully understood.
True, but modern "western"[1] medicine is not about the specific chemicals used, or even about knowing exactly how they work at a chemical level, but about the process for identifying what does and does not work. It's an "evidence-based" science, with experiments designed to counter known biases such as the placebo effect. Much of what we consider modern medicine was developed before we were entirely sure that atoms actually existed!
[1] It isn't actually western, because it's also used in the east, middle-east, south, both sides of every divide, etc... In the same sense, there is no "western chemistry" as an alternative to "eastern alchemy". There's "things that work" versus "things that make you feel slightly better because they're mild narcotics or stimulants... at best."
(I don't want to focus too much on Chinese herbal medicine, because I see the same cargo-culting non-scientific thinking in code development too. I've lost count of the number of times I've seen an n-tier SPA monstrosity developed for something that needed a tiny monolithic web app, but mumble-mumble-best-mumble-practices.)
"Western medicine" (which is exactly what it is called in China, to contrast with TCM) is shorthand for "practices invented in the west". That these methods chase universal truths, or are practiced world-wide, do not make them "non-west" in terms of origin.
The Chinese call the practice of truth seeking in a broader sense (outside of medicine) simply "science".
"Western" medicine is also not merely the practice of seeking universal medical truth. It is also a collection of paradigms that have been developed in its long history. Like all paradigms, there are limits and drawbacks: phenomena that do not fit well. Truth seeking tends to be done on established paradigms rather than completely new ones.
The "western" prefix is helpful in contrasting it with TCM, which has a completely different paradigm. Many Chinese, myself included, have the experience that there are all sorts of ailments that are not meaningfully solved by "western" medicine practitioners, but are meaningfully solved by TCM practitioners.