Define understanding. And give evidence that humans have it. Seriously, I wish people would stop using terms like "understanding", "consciousness" and "sentience" until we know what it is (which is unlikely to ever happen).
> Define understanding. And give evidence that humans have it.
Defining such terms is notoriously difficult, but the evidence is readily available. A human cashier would've told someone ordering 18,000 waters and Taco Bell to go away, because a human understands why that request is nonsense.
I leave the why and the precise origin of that to the philosophers; it's not my field. That said, as someone who experiences understanding and knows that ordering 18,000 waters is nonsense, I feel qualified to say this LLM is not capable of it.
> I feel qualified to say this LLM is not capable of it.
This LLM has been demonstrated to be incapable of it, but there is no known reason why an LLM cannot dismiss such an order as nonsense - and you were claiming in the original comment that "LLMs do not possess any sort of understanding" and "These. Are not. Intelligent. Machines." An LLM fine-tuned to reject nonsensical requests would certainly be able to do so (another question is how well that would generalize - but then humans aren't perfect in that regard either).
To be clear - I do not think LLMs are the universal solution to everything that they are advertised as. They do lack some unknown, important component of intelligence. But using such anthropomorphic terms is really pointless - you are basically claiming "they will never be capable of doing something because they never will."