
ChatGPT is just statistically associating what it’s observed online. I wouldn’t trust dietary advice from the mean output of Reddit over an expert’s.



Doctors can be associating what they’ve learned, often with heavy biases from hypochondriacs and not enough time per patient to really consider the options.

I’ve had multiple friends get seriously ill before a doctor took their symptoms seriously, and this is a country with decent healthcare by all accounts.

Human biases are bad too.


> Doctors can be associating what they’ve learned, often with heavy biases from hypochondriacs

So true. And it's hard to question a doctor's advice because of their aura of authority, whereas it's easy to do further validation of an LLM's diagnosis.

I had to change doctors recently when moving towns. It was only on chancing upon a good doctor that I realised how bad my old doctor was - a nice guy, but cruising to retirement. My experience with cardiologists has been the same.

Happy to get medical advice from an LLM, though I'd certainly want prescriptions and action plans vetted by a human.


    > It was only when chancing on a good doctor that I realised how bad my old doctor was
How did you determine the new doctor is "good"?


By the time a doctor paid me enough attention to realise something was wrong, I had suffered a spinal cord injury whose damage can never be reversed. I’m not falling all over myself to trust ChatGPT, but I have practically zero trust in doctors either. Nobody moved until I threatened to start suing.


I sometimes use ChatGPT to prepare for a doctor's visit so I can have a more intelligent conversation, even if overall I trust my doctor more than the AI.


It will be cool once we have active agents though. Surely the learning/research process isn't that difficult even for current LLMs and similar architectures. If a model can teach itself, or collate new (never-seen) data for other models, that's the cool part.


You realize that "online" doesn't just mean Reddit, but also Wikipedia and arXiv and PubMed and other sources perused by actual experts? ChatGPT has read more academic publications in any given field than any human has.


Yes, but because ChatGPT doesn’t think, it doesn’t know which arXiv papers are absolute garbage and which ones are legit.

Wikipedia does not have dietary advice. It’s an encyclopedia.


I’ve seen so many doctors advertising or recommending homeopathic “medicines” or GE-132 [1] that I would be considerably more confident in an LLM plus my own verification from reliable sources. I’m no doctor, but I know more than enough to recognize bullshit, so I wouldn’t recommend this approach to just anyone.

[1] https://pubmed.ncbi.nlm.nih.gov/1726409/



