"your comment confidently states this is unfixable - presumably based on the frequency you've seen similar text on the internet. why should anyone believe the veracity of your statement? "
Yep, the idea of grounding seems interesting to me. Everything in an LLM is just a statistical dream at this point, with no basis in reality. I wonder if it's possible to give the language model grounding points of things that are real and build a truth model from that.
No, it's because GPT is based on transformers.