
My impression is that these models are already doing far more than the language production machinery in our brains does. We produce language according to grammar and semantics, but we also have independent mental representations that guide generation and provide context.

I don't really understand why we're trying so hard to build models that generate coherent text after having predigested only other text, without any other experience of reality. They already appear superhuman at imitating styles and patterns of any kind (including code generation, images, etc.). It feels like we're overshooting our target by trying to solve an unsolvable problem: deriving the semantics of reality from pure text, without any other type of input.



