> Generating a convincing response and telling lies, or not, are not related.
My point exactly. GPT does the former and doesn't concern itself with the latter.
"Lie" implies intent. There is no lie here; these are perfectly fine answers to your questions. They're just unrelated to the model itself, as it has no real concept of "I". You can imagine someone answering these questions that way, and that's all that matters - the model did its job well.