
> The nature of how LLMs hallucinate

oh here we go. You're one of those people conveniently restricting this accusation to a machine that scores a 130 IQ (https://www.reddit.com/r/singularity/comments/11t5bhh/i_just...), instead of also applying it to humans, who will notoriously send someone to prison while 1000% sure they witnessed that person doing the thing, only for later DNA evidence to exonerate them (https://innocenceproject.org/dna-exonerations-in-the-united-...). Fucking LOL. Get out of here, doomer, the rest of us have AI-enhanced work to do.



I miss when this forum wasn’t so vitriolic.

Of course humans also hallucinate, but before LLMs we didn't have to take that into account every single time we read a piece of information on the internet. Humans have well-documented cognitive biases, and a human's attempt to deceive usually has some discernible motivation. With LLMs, even the most basic information they provide could be totally false.


I don't dispute this. What I chafe at is the default-dismissive attitude toward any utility these tools have. "It emits inaccuracies, therefore it's useless" would invalidate literally every human.

That said, overall utility of anything plummets drastically as reliability goes below 100%. If a particular texting app or service only successfully sent 90% of your messages, or a person you depended on only answered 90% of your calls, you'd probably stop relying on those things.
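
To put rough numbers on that (purely illustrative, and assuming each step succeeds independently, which real workflows won't exactly match): if you chain several 90%-reliable steps, end-to-end reliability decays exponentially.

    # Illustrative sketch: how per-step reliability compounds across a pipeline.
    # Assumes each step succeeds independently with the same probability.
    def chained_success(per_step: float, steps: int) -> float:
        """Probability that every step in an n-step pipeline succeeds."""
        return per_step ** steps

    for n in (1, 5, 10, 20):
        print(f"{n:>2} steps at 90% each -> {chained_success(0.9, n):.0%} end-to-end")
    # 1 step -> 90%, 5 steps -> 59%, 10 steps -> 35%, 20 steps -> 12%

Ten 90%-reliable steps already fail most of the time, which is why "mostly right" is worse than it sounds.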

(I wish I could edit out my vitriol.)


Those are both excellent points. And I know I'm guilty of being somewhat anti-LLM just because it's the new hotness and I'm kind of a contrarian by nature, which is an example of bias right there! Having been in academia when it blew up, I do worry about our future cohorts of computer scientists if academia doesn't adapt, which it almost surely won't. But that's not a problem inherent to LLMs.


Did you fully read the post you blew up at? I didn't doubt the usefulness of LLMs. It was a very specific complaint about posting LLM-generated content on the Internet without labeling it as the off-the-cuff trash it usually is.


You can have a high IQ and still hold wrong axioms or bad facts.

Those humans who recalled incorrectly could also have 130 IQs, which proves my point above and makes your ad hominem, Reddit-speak luddite insult fall flat.


My argument was that this criticism was being applied only to LLMs and not to humans.

The criticism is invalid if you treat what an LLM says as simply what another human (one who happens to have a broad reach of knowledge) might say.



