>Humans have those exact same constraints. For the longest time we could only speculate what the dark side of the moon looked like, for instance.

That isn't the exact same constraint. We could speculate that the moon had a "dark side" because we understood what a moon was and what a sphere was. LLMs cannot speculate about anything outside their existing data model at all.

>When we talk about things we don't understand we speak gibberish, just like LLMs.

No, we don't, wtf? We may create inaccurate models or theories, but we don't just chain together words by statistical likelihood the way LLMs do.
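To be precise about the "chaining": an LLM draws each next word from a learned probability distribution, not uniformly at random. A toy sketch in Python (the scores here are made up and stand in for a real model's logits):

    import math, random

    def sample_next(logits):
        # softmax: turn raw scores into probabilities
        m = max(logits.values())
        exps = {w: math.exp(s - m) for w, s in logits.items()}
        total = sum(exps.values())
        # draw one word, weighted by its probability
        r, acc = random.random(), 0.0
        for w, e in exps.items():
            acc += e / total
            if r < acc:
                return w
        return w  # guard against float rounding

    # toy next-word scores after a prefix like "the moon is"
    print(sample_next({"round": 2.0, "far": 1.0, "cheese": -1.0}))

So the output is locally fluent rather than gibberish, but there's no model of the world behind it, which is the distinction being drawn above.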
