There are two key things to realize here.

People also often don't understand things and have trouble separating fact from fiction. By logic, at most one religion can be true (or none at all). Consequently, also by logic, the followers of most religions in the world, who believe their religion to be true, are hallucinating.

The second thing to realize is that your argument doesn't really apply. It's in theory possible to create a stochastic parrot that imitates, with 100 percent fidelity, the output of a human who truly understands things. That blurs the line of what understanding is.

One can even define true understanding operationally: a stochastic parrot whose generated text is indistinguishable from that of total understanding.
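
To make "stochastic parrot" concrete: even a toy bigram Markov model is one, in the sense that it generates text purely from observed word statistics, with no model of meaning or truth. A minimal sketch in Python (purely illustrative; real LLMs are vastly more capable, but the idea of sampling from learned statistics is the same):

    import random
    from collections import defaultdict

    def train_bigram(corpus):
        # Record which words were observed to follow each word.
        followers = defaultdict(list)
        words = corpus.split()
        for prev, nxt in zip(words, words[1:]):
            followers[prev].append(nxt)
        return followers

    def parrot(followers, start, length=10):
        # Generate text by sampling observed continuations --
        # pure statistics, no notion of meaning or truth.
        out = [start]
        for _ in range(length):
            options = followers.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

    model = train_bigram("the cat sat on the mat and the cat slept on the mat")
    print(parrot(model, "the"))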



> People also often don't understand things and have trouble separating fact from fiction.

That's not the point being argued. Understanding, critical thinking, knowledge, common sense, etc. all exist on a spectrum, both in principle and certainly in humans. In fact, any particular human has different levels of competence across these dimensions.

What we are debating is whether or not an LLM can have understanding at all. One test is: can an LLM understand understanding? The human mind has come to the remarkable realization that understanding itself is provisional and incomplete.


Of course it can. Simply ask the LLM about itself; ChatGPT-4 can answer.

In fact, that is one of the more trivial questions, one it will most likely not hallucinate on.

The reason I alluded to humans here is that I'm saying we are setting the bar too high. Everyone is saying that it hallucinates and therefore can't understand anything. I'm saying that we hallucinate too, and because of that, LLMs can approach humans and human-level understanding.



