Because it’s an expression of exasperation. A customer support bot would attempt to be helpful at all times, either answering the question (wrongly or not) or changing the subject.
Only the v1 naive customer service bot will attempt to be helpful at all times.
The better product managers will quickly pick up on the (measurable) user frustration generated that way, and from there it's technically trivial to alter the LLM prompts such that the bot simulates being annoyed in a plausible way when tested like this.
That hypothetical scenario is as absurd as it is irrelevant. I don’t understand why some humans feel such a burning need to be LLM apologists.
Companies don’t want customer support representatives to show annoyance or frustration even if they’re human; wasting time and resources adding that unhelpful behaviour to chatbots just to deliberately trick the minute number of people who use these techniques is absurd.
And it’s irrelevant because (it should be obvious) neither I nor the OP claimed this was a solution to distinguish humans from bots forever. It was just a funny story of something that was effective now in one specific scenario.