So nothing is ever a hallucination, because anything an LLM spits out is somehow somewhere in the training data?

Technically it's the other way around. All LLMs do is hallucinate based on the training data + prompt. They're "dream machines". Sometimes those "dreams" might be useful (close to what the user asked for/wanted). Oftentimes they're not.

> To quote Karpathy: "I always struggle a bit [when] I'm asked about the "hallucination problem" in LLMs. Because, in some sense, hallucination is all LLMs do. They are dream machines."

https://nicholas.carlini.com/writing/2025/forecasting-ai-202... (click the button to see the study, then scroll down to the hallucinations heading)
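To make the "dream machine" framing concrete, here's a minimal sketch in plain Python (toy logits I made up, not tied to any real model or library): the model assigns a score to every token in its vocabulary, and generation is just repeated sampling from the resulting distribution. A "correct" continuation and a "hallucinated" one come out of the exact same mechanism.

    import math
    import random

    # Hypothetical logits for the next token after the prompt
    # "The capital of France is" -- invented numbers for illustration.
    logits = {"Paris": 6.0, "Lyon": 2.5, "Berlin": 1.0, "Narnia": 0.5}

    def sample_next_token(logits, temperature=1.0):
        """Softmax over logits, then draw one token at random."""
        scaled = {tok: l / temperature for tok, l in logits.items()}
        max_l = max(scaled.values())  # subtract max for numerical stability
        exps = {tok: math.exp(l - max_l) for tok, l in scaled.items()}
        total = sum(exps.values())
        probs = {tok: e / total for tok, e in exps.items()}
        r = random.random()
        cum = 0.0
        for tok, p in probs.items():
            cum += p
            if r < cum:
                return tok, probs
        return tok, probs  # fallback for floating-point edge cases

    token, probs = sample_next_token(logits)
    print(probs)   # every token, plausible or not, gets nonzero probability
    print(token)   # usually "Paris", occasionally something else

The point of the sketch: there is no separate "lookup facts" path versus "make things up" path; whether the sampled token happens to match reality is judged by us after the fact.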


No. That’s not correct. Hallucination is a pretty accurate way to describe these things.