It would probably help if we stopped anthropomorphizing ChatGPT. It's an algorithm that consumes input and produces output. Assigning it human traits is asking for disappointment when it acts like an ML algorithm.
I think there are two distinct phenomena occurring here: emotionally treating nonsapient objects, plants, and animals as if they were friends, and coming to care about them; and intellectually treating ML algorithms as if they were fully sapient, fully intelligent autonomous agents with the same basic capabilities as humans.
The former smooths the way for the latter, to be sure, but it does not require it. Almost no one who's putting googly eyes on a boulder is going to insist in all seriousness that Bouldy is capable of intelligent thought, or that it has rights that can be violated.
> Almost no one who's putting googly eyes on a boulder is going to insist in all seriousness that Bouldy is capable of intelligent thought, or that it has rights that can be violated.
You dare discriminate against my pet rock?! We can't be friends!
Engineer it out? Probably not. But folks acting as experts in these discussions should keep it in mind. Human analogies are easy, but when something is this close to the "Turing test" line, we should try to avoid them.
I think the disappointment comes from the fact that it acts like an ML algorithm that is specifically constrained and limited in its responses out of fear of woke backlash. That's the part that disappoints people, not the ML part.