
LLMs and essentially all neural networks can be viewed as learning compression algorithms where the behavior of the compression algorithm is learned and subject to potential constraints beyond mere file reconstruction.

Highly recommend reading Ted Chiang's "ChatGPT Is a Blurry JPEG of the Web"[0] to get a better sense of this.

Keeping this fact in your mental model of neural networks can go a long way toward demystifying them.

0. https://www.newyorker.com/tech/annals-of-technology/chatgpt-...
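The prediction-compression connection can be made concrete with a toy sketch (my own illustration, not from the article): an ideal entropy coder spends about -log2 p(symbol) bits per symbol, so a model that predicts the data better yields a shorter encoding. Here a "learned" unigram model is compared against a uniform baseline.

```python
import math
from collections import Counter

def code_length_bits(text, probs):
    # Shannon code length: an ideal entropy coder (e.g. arithmetic coding)
    # spends roughly -log2 p(ch) bits on each symbol.
    return sum(-math.log2(probs[ch]) for ch in text)

text = "the quick brown fox jumps over the lazy dog " * 20

# Baseline: uniform model over the observed alphabet (knows nothing).
alphabet = set(text)
uniform = {ch: 1.0 / len(alphabet) for ch in alphabet}

# "Learned" model: unigram frequencies fitted to the data itself.
counts = Counter(text)
unigram = {ch: counts[ch] / len(text) for ch in alphabet}

print(f"uniform model: {code_length_bits(text, uniform):.0f} bits")
print(f"unigram model: {code_length_bits(text, unigram):.0f} bits")
```

An LLM is the same idea scaled up: it assigns much sharper next-token probabilities than any unigram model, so pairing it with an arithmetic coder compresses text far better, and its weights amount to a lossy, "blurry" summary of the training corpus.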




(The human brain is also, in part, a blurry JPEG of the world.)


We currently have no reason to believe this, and the information we do have suggests it is very unlikely to be the case. I'm also guessing from my username you can infer that I don't think we even know enough to say concretely what this "world" you are referencing is.


I don't know exactly what "blurry JPEG" means to you, but we have every reason to believe we operate on shortcuts of reality, not reality itself. Much of what your brain does with sense data is warp it to conform to internal predictions, in numerous ways.

Memories are always part fabrication. You can't return to previous mental states (you only think you can), and we have no real clue what actually informs decisions, e.g. preferences shape choices just as much as choices shape preferences.

Your brain will happily fabricate rationales you sincerely believe for decisions, rationales that couldn't possibly be true, e.g. the split-brain experiments.


I for one massively compress my experience. I remember things on autocomplete. I have memories where different time periods are mixed together: my recollection of a room will have furniture in it that was only added later, for instance.



