Hacker News
People are bad at understanding just how big LLM's are (twitter.com/jam3scampbell)
3 points by bilsbie on Jan 19, 2024 | 2 comments



He says

  People are really bad at understanding just how big LLM's actually are. I think this is partly why they belittle them as 'just' next-word predictors
I think it's actually the exact opposite: people think LLMs are "intelligent" or otherwise special (for example, capable of being belittled) because they don't understand how big they are. ELIZA, or some other rules-based chatbot with, say, dozens of rules, is transparently just a gimmick. LLMs are no different; they just have billions of "rules", obfuscated with some math. But people don't get that, and imagine the models must be sentient because they give such nice responses. If people truly comprehended the scale of LLMs, they'd be less likely to believe in any kind of intelligence.
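The ELIZA-vs-LLM contrast the commenter draws can be made concrete. Below is a minimal sketch of an ELIZA-style responder in Python; the rules and reply templates are illustrative inventions, not Weizenbaum's original script:

```python
import re

# A handful of regex rules, each mapping a pattern to a canned reply
# template. These example rules are made up for illustration.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
]

def respond(text: str) -> str:
    # Return the reply for the first matching rule, else a fallback.
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(*m.groups())
    return "Please go on."
```

With three rules the mechanism is transparently a lookup; the commenter's point is that an LLM does the same "map input to output" job through billions of learned parameters instead of a visible rule table, which hides the mechanics without changing their nature.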


I'm pretty sure he meant people are belittling the technology, not the individual models...

Anyway, aren't humans just a bunch of rules, obfuscated with some biology?



