For the first point, the big difference between genAI hallucinations and people, imo, is that you can meaningfully reason with a person, address where they may be getting incorrect information, and, assuming they're willing, get them to do better in the future. None of that is really possible with things like ChatGPT at the moment: you can't teach it anything, you have to wait for a new version. You can give it some preamble that gets it to work a little better, like asking it to think logically or show its work step by step, but it still can't count the number of Rs in strawberry.
The Rs-in-strawberry thing is an artifact of tokenization: the model sees the word as a few sub-word tokens, not as individual letters. Ask it to spell the word out with dashes between the letters or something first and it'll do fine.
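To make that concrete, here's a minimal sketch using OpenAI's tiktoken library (assuming it's installed and using the cl100k_base encoding; the exact token splits vary by model) showing that "strawberry" arrives as a handful of multi-letter chunks, while the dashed spelling comes through roughly letter by letter:

```python
# Sketch: why letter counting is hard for a token-based model.
# Assumes the tiktoken package and the cl100k_base encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]

print(token_ids)  # a short list of integer IDs, not one per letter
print(pieces)     # sub-word chunks the model actually "sees"

# Spelling it out with separators yields roughly one token per letter,
# which is why the dash trick in the comment above helps.
spelled = "-".join(word)
print([enc.decode([t]) for t in enc.encode(spelled)])
```

The point is just that the model never operates on the raw characters, so asking it to emit the letters one at a time first puts the information it needs into its own context.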