Granted, maybe it was a bit unclear, so let me clarify my point:
In humans, hallucination is about losing the relationship with an underlying physical world: a world whose model we carry in our heads and interact with in intentional ways when we are not hallucinating.
That means using the word "hallucinating" implies the thing could also not be hallucinating and instead have a grip on reality. And this was my criticism: an LLM spits out plausible phrases; if the graph didn't consider an output plausible, it wouldn't return it. So for the LLM there is no difference between plausible bogus and a factually correct statement; that distinction is something humans interpret into the output from the outside.
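To make that concrete, here is a minimal sketch (plain Python with a hypothetical toy vocabulary and hand-picked scores, not any real model's API) of what "returning an output" amounts to: sampling the next token from a softmax over plausibility scores. Nothing in this loop distinguishes a true continuation from a merely plausible one.

```python
import math
import random

# Toy next-token scores for a prompt like "The capital of France is ...".
# The values are made up; the point is that the model only has scores
# ("plausibility"), no truth predicate.
vocab = ["Paris", "Lyon", "Berlin", "cheese"]
logits = [4.0, 1.5, 1.0, -2.0]

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0):
    probs = softmax([s / temperature for s in logits])
    # Pick a token proportionally to its probability -- pure plausibility,
    # with no check anywhere for whether the continuation is factually correct.
    return random.choices(vocab, weights=probs, k=1)[0]

print(sample_next_token(vocab, logits))
```

Whether the sampled token happens to be factually correct is judged outside this loop, which is exactly the point above.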