
It is well known that it is impossible to eliminate hallucination in LLMs.

Here is a formal proof.

https://arxiv.org/abs/2401.11817

Hopefully this article helps spread the word. Since hallucination can't be eliminated, it is something that will have to be mitigated to within acceptable limits.
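
For anyone who doesn't want to open the PDF: as I read the abstract, the core of the proof is a diagonalization over computable models. Here is a toy sketch of that idea in Python. The tiny model list and the function names are mine and purely illustrative, not the paper's formal construction:

  # Toy illustration of the diagonal argument (my sketch, not the
  # paper's formal construction): for any fixed, enumerable family
  # of "models", a ground truth can be defined that every model
  # gets wrong on at least one input.

  def make_models():
      # Stand-ins for an enumeration of computable LLMs:
      # each maps a prompt index to an answer.
      return [
          lambda i: i,      # model 0: echoes the index
          lambda i: i + 1,  # model 1: off by one
          lambda i: 0,      # model 2: constant answer
      ]

  def diagonal_ground_truth(models):
      # Build truth(i) so that it differs from models[i](i).
      def truth(i):
          return models[i](i) + 1
      return truth

  models = make_models()
  truth = diagonal_ground_truth(models)

  for i, model in enumerate(models):
      assert model(i) != truth(i)
      print(f"model {i} is wrong on input {i}: "
            f"says {model(i)}, ground truth is {truth(i)}")

If I'm reading it right, the paper runs this kind of argument over all computable LLMs rather than a hand-picked list, which is why no training trick escapes it and why mitigation, not elimination, is the realistic goal.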

LLMs are NLP, not NLU.



> NLU

Natural Language Understanding

(I had to look it up. >_>)



