This sounds eerily like LLM hallucination:

--- In one of Gazzaniga’s experiments, researchers presented the word “walk” to a patient’s right brain only. The patient immediately responded to the request and stood up and started to leave the van in which the testing was taking place. When the patient’s left brain, which is responsible for language, was asked why he got up to walk, the interpreter came up with a plausible but completely incorrect explanation: “I’m going into the house to get a Coke.”

In another exercise, the word “laugh” was presented to the right brain and the patient complied. When asked why she was laughing, her left brain responded by cracking a joke: “You guys come up and test us each month. What a way to make a living!” Remember, the correct answer here would have been, “I got up because you asked me to,” and “I laughed because you asked me to,” but since the left brain didn’t have access to these requests, it made up an answer and believed it rather than saying, “I don’t know why I just did that.” ---




This is precisely what psychologists call “confabulation”, and it is what hallucinations in language models should be called, because the same thing is happening there: the model produces an answer that is factually wrong but plausible and consistent with a prior action.

https://twitter.com/ylecun/status/1667272618825723909?s=46&t...



