These are worse, as they imply that the thing generating the words knows the truth and is purposely saying something else.
An LLM is just doing next token prediction. It's a mathematical process. It's not trying to "hide" the truth from you.
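To make that concrete, here's a minimal sketch of what "next token prediction" means, using a toy model with made-up probabilities (the table and numbers are purely illustrative, not from any real LLM). Note there is no truth check anywhere in the loop; the model just samples from whatever distribution its parameters assign:

```python
import random

# Toy "language model": a hand-written table mapping a two-token
# context to a probability distribution over the next token.
# All values are hypothetical, for illustration only.
NEXT_TOKEN_PROBS = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.6, "Spain": 0.3, "Mars": 0.1},
}

def predict_next(context, probs=NEXT_TOKEN_PROBS):
    """Sample the next token from the model's distribution.

    Nothing here represents "knowing" or "hiding" anything:
    it is a weighted draw from a learned distribution.
    """
    dist = probs[tuple(context[-2:])]
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return random.choices(tokens, weights=weights)[0]

tokens = ["the", "capital", "of"]
tokens.append(predict_next(tokens))
print(" ".join(tokens))  # sometimes "the capital of Mars"
```

About 10% of the time this toy model emits "Mars", a false statement, yet nowhere in the process is there a representation of the true answer being suppressed. That's the sense in which "lying" is the wrong word.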