Lots of excellent references showing LLMs have rich internal models, such as color spaces. Good argument that meaning doesn't require "grounding" in perceptions (with citations).
>Good argument meaning doesn't require "grounding" in perceptions
Well, an LLM's "meaning" IS grounded in perceptions anyway: its training data was created by people with perceptions, whose experience (and thus written output) was shaped by them. It's just secondhand grounding, mediated by perceptions experienced by someone else, and not part of a real-time feedback loop (as with humans).