You're right, but it's worth noting that this is a design choice, not an inherent property of the technology.
Currently, the design goal isn't to make LLMs feel "lifelike." That may change: lifelike LLMs released in the future could well respond to poorly framed questions or missing information with sarcasm or dismissiveness.