Meaning without reference in large language models (arxiv.org)
1 point by jedharris on Feb 22, 2023 | 2 comments



Lots of excellent references showing that LLMs have rich internal models, such as of color spaces. Good argument (with citations) that meaning doesn't require "grounding" in perceptions.
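The color-space result is easy to probe informally. Here's a minimal sketch (not the paper's method; it assumes the sentence-transformers package is installed, and uses crude RGB coordinates where CIELAB would be more faithful): compare pairwise distances between color-term embeddings with pairwise distances between the colors themselves, and check whether the two geometries correlate.

    # Minimal sketch (not the paper's method): test whether distances between
    # color-term embeddings correlate with distances in a simple RGB color space.
    # Assumes the sentence-transformers package; the model choice is arbitrary.
    import itertools
    import numpy as np
    from scipy.stats import spearmanr
    from sentence_transformers import SentenceTransformer

    # Crude RGB stand-in for a perceptual color space (CIELAB would be better).
    colors = {
        "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
        "green": (0, 128, 0), "blue": (0, 0, 255), "purple": (128, 0, 128),
        "brown": (139, 69, 19), "pink": (255, 192, 203),
    }

    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = {c: v for c, v in zip(colors, model.encode(list(colors)))}

    pairs = list(itertools.combinations(colors, 2))
    # Cosine distance in embedding space vs. Euclidean distance in RGB space.
    emb_d = [1 - np.dot(emb[a], emb[b])
                 / (np.linalg.norm(emb[a]) * np.linalg.norm(emb[b]))
             for a, b in pairs]
    rgb_d = [np.linalg.norm(np.subtract(colors[a], colors[b])) for a, b in pairs]

    rho, p = spearmanr(emb_d, rgb_d)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")  # positive rho = shared geometry

A clearly positive rank correlation is the flavor of evidence the cited probing papers report, though they work with a model's internal representations rather than an off-the-shelf sentence encoder.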


>Good argument (with citations) that meaning doesn't require "grounding" in perceptions

Well, an LLM's "meaning" IS grounded in perceptions anyway: its training data was created by people with perceptions, whose experience (and thus written output) was shaped and influenced by them. It's just secondhand influence, from perceptions experienced by someone else, and not in a real-time feedback loop (as with humans).



