
This is interesting. The blog post links several papers, and I recommend reading them.

The responses here, however, don't seem commensurate with the evidence presented. Two of the papers[0][1] that the blog post's illustration is sourced from describe research on a very small group of subjects. They measure neural activity while participants listen to a 30-minute podcast (about 5,000 words) and try to guess upcoming words. All the talk of "brain embeddings" comes from interpreting that neuronal activity and sensor data geometrically. It is all very contrived.
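
For anyone unfamiliar with what "interpreting neural activity geometrically" means in this literature, here is a minimal sketch of the usual encoding-model setup: fit a cross-validated linear map from a language model's per-word contextual embeddings to per-electrode activity and score it by held-out correlation. The data, dimensions, and regularization below are made up for illustration; the papers' actual pipelines differ in detail.

    # Sketch of an encoding-model "alignment" analysis (illustrative only).
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(0)
    n_words, emb_dim, n_electrodes = 5000, 768, 64   # ~5,000-word podcast transcript
    X = rng.standard_normal((n_words, emb_dim))       # stand-in for contextual embeddings (e.g. GPT-2)
    Y = rng.standard_normal((n_words, n_electrodes))  # stand-in for per-word neural activity

    # Cross-validated linear map from embeddings to neural activity;
    # the held-out correlation per electrode is the "alignment" score.
    scores = np.zeros(n_electrodes)
    kf = KFold(n_splits=5, shuffle=True, random_state=0)
    for train, test in kf.split(X):
        pred = Ridge(alpha=1.0).fit(X[train], Y[train]).predict(X[test])
        for e in range(n_electrodes):
            scores[e] += np.corrcoef(pred[:, e], Y[test][:, e])[0, 1] / kf.get_n_splits()

    print("mean held-out correlation per electrode:", scores.mean())

With random stand-in data the correlations hover around zero; the papers' claim is that with real embeddings and real recordings they don't.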

Very interesting stuff from a neuroscience, linguistics and machine learning perspective. But I will quote from the conclusion of one of the papers[1]: "Unlike humans, DLMs (deep language models) cannot think, understand or generate new meaningful ideas by integrating prior knowledge. They simply echo the statistics of their input"

[0] Alignment of brain embeddings and artificial contextual embeddings in natural language points to common geometric patterns (https://www.nature.com/articles/s41467-024-46631-y)

[1] Shared computational principles for language processing in humans and deep language models (https://www.nature.com/articles/s41593-022-01026-4)

>"Unlike humans, DLMs (deep language models) cannot think, understand or generate new meaningful ideas by integrating prior knowledge. They simply echo the statistics of their input"

[Citation needed]. To be fair, the paper does give a citation for that claim (G. F. Marcus, The Algebraic Mind), dated 2019 in their reference list (i.e. still before GPT-3), though the book itself actually seems to be from the early 2000s.



