
Hallucinating is roughly how they work; we only label it a hallucination when the output is obviously weird.



This is something I'm not sure people understand.

LLMs only make a "best guess" for each next token. That's it. When the guess is wrong we call it a "hallucination", but really the entire output was a "hallucination" to begin with.
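
To make that concrete, here's a minimal sketch of what "best guess per token" means. It assumes the Hugging Face transformers library and the small gpt2 checkpoint, both picked purely for illustration:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    input_ids = tokenizer("The capital of Australia is", return_tensors="pt").input_ids

    # Generation is just this loop: score every token in the vocabulary,
    # take the most plausible one, append it, repeat. There is no separate
    # "look up a fact" mode vs. "hallucinate" mode.
    with torch.no_grad():
        for _ in range(10):
            logits = model(input_ids).logits                          # (1, seq_len, vocab_size)
            next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)   # greedy "best guess"
            input_ids = torch.cat([input_ids, next_id], dim=-1)

    print(tokenizer.decode(input_ids[0]))

Whether that loop continues with "Canberra" or something confidently wrong depends entirely on which token happens to score highest; the mechanism is identical either way.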

This is also analogous to humans, who "hallucinate" incorrect answers too, and who usually do so less when told to "Think through this step by step before giving your answer", etc. A tiny illustration of that prompt trick is below.
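
A sketch of the two prompt variants, where complete() is a hypothetical stand-in for whatever completion call you use:

    question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
                "more than the ball. How much does the ball cost?")

    direct_prompt = f"Q: {question}\nA:"
    cot_prompt = f"Q: {question}\nThink through this step by step before giving your answer.\nA:"

    # complete(direct_prompt)  -> often the knee-jerk "10 cents" (a confident wrong guess)
    # complete(cot_prompt)     -> extra tokens to work it out, so "5 cents" more often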



