This is precisely what psychologists call "confabulation", and it is what hallucinations in language models should be called, because the same thing is happening there: an answer is given that is factually wrong, yet plausible and consistent with a prior action.

https://twitter.com/ylecun/status/1667272618825723909?s=46&t...


