No, because the LLM is a tool without any feeling or consciousness, as the article rightly points out. It has no ability to scrutinize its own internals, nor any ability to wonder whether doing so would be relevant.
Those who lie (possibly even to themselves) are those who pretend that mimicry, if stretched far enough, will surpass the real thing, and who foster deceptive psychological analogies like "hallucinate".
The LLM doesn't have a brain or consciousness, so it doesn't "hallucinate"; it just produces factually incorrect results.
It's just wrong, and then gives misleading explanations of how it got the wrong answer, following the same process that led to the wrong answer in the first place. Lying is a subset of being wrong.
The tech has great applications, so why hype the things it doesn't do well, or apply terms that misrepresent the process the software uses?
One might say the use of the word "hallucinate" is an analogy, but it's a poor one, which further misleads the lay public about what is actually happening inside the LLM and how its results are generated.
If you want to assert that "hallucinate" is an analogy, then "lying" is also an analogy.
If every prompt that ever went into an LLM were prefixed with "Tell me a made-up story about: ...", then user expectations would be more in line with what the output represents.
I'm not averse to the tech in general, but I am against the rampant misrepresentation that's going on...