It's a similar case in principle, but the devil is in the details. GPT can be very convincing at being human-like. Not only that, but it can "empathize" etc. - if anything, RLHF seems to have made this more common. When you add hallucinations on top of that, what you end up with is a perfect conman who doesn't even know it's a conman, but can nevertheless push people's emotional buttons to get them to do things they really shouldn't be doing.
Yes, eventually we'll all learn better. But that will take time, and detailed explanations will also take time and won't reach much of the necessary audience. Meanwhile, Bing is right here, right now, for everyone willing to use it. So I'm less concerned about OpenAI's reputation than about people following some hallucinated safety protocol because the bot told them it's "100% certain" that this is the safe and proper way to do something dangerous.