Technically it's the other way around. All LLMs do is hallucinate based on their training data + the prompt. They're "dream machines". Sometimes those "dreams" happen to be useful (close to what the user asked for or wanted); oftentimes they're not.
> To quote Karpathy: "I always struggle a bit when I'm asked about the 'hallucination problem' in LLMs. Because, in some sense, hallucination is all LLMs do. They are dream machines."