People are really bad at understanding just how big LLMs actually are. I think this is partly why they belittle them as 'just' next-word predictors.
I think it's actually the exact opposite: people think LLMs are "intelligent" or otherwise special (for example, capable of being belittled) because they don't understand how big they are. Eliza, or some other rules-based chatbot with, say, dozens of rules, is transparently just a gimmick. LLMs are no different; they just have billions of "rules", obfuscated with some math. But people don't get that and imagine they must be sentient because they give such nice responses. If people truly comprehended the scale of LLMs, they'd be less likely to believe in any kind of intelligence.