
I have not said there is no value in LLMs, quite the contrary.

What I'm warning against is thinking of them as independent agents with their own minds; they don't work like that at all, so to treat them that way is to anthropomorphise them.

These models certainly embody a compilation of knowledge, but it is statistical knowledge - in the same way that a book of logarithm tables contains a great deal of mathematical knowledge, yet you wouldn't say the book 'knows logarithms'. The compilation holds statistical 'truths' about the topics on which the model was trained; and unlike a printed book, that knowledge can be used operationally to build new information.

Yet that static knowledge does not amount to a will of its own; nothing in the content generation system makes decisions or sets its own objectives from those statistical tables of compiled knowledge.
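To make that concrete, here is a minimal toy sketch (my own illustration, not any real model's code): a bigram table of word-follows-word statistics, plus a generation loop that does nothing except sample from it. A real LLM learns vastly richer statistics, but the loop has the same shape - there is no goal state anywhere in it.

  import random

  # Hypothetical toy "model": for each word, the statistical distribution
  # of words observed to follow it. All words and weights are invented.
  bigrams = {
      "the": {"cat": 0.6, "dog": 0.4},
      "cat": {"sat": 0.7, "ran": 0.3},
      "dog": {"sat": 0.5, "ran": 0.5},
      "sat": {"quietly": 1.0},
      "ran": {"quickly": 1.0},
  }

  def generate(start, steps):
      word, out = start, [start]
      for _ in range(steps):
          dist = bigrams.get(word)
          if not dist:
              break  # no statistics for this word; nothing else happens
          # The only "decision" is a weighted draw from the learned table.
          word = random.choices(list(dist), weights=list(dist.values()))[0]
          out.append(word)
      return " ".join(out)

  print(generate("the", 3))  # e.g. "the cat sat quietly"

The table compiles knowledge and the loop uses it operationally to produce new text, but nowhere does the system choose an objective; swap the table for billions of learned parameters and the point stands.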


