It's really, really not. The post is saying "look at how useful our product is", where "our product" here extends significantly beyond the ChatGPT that the general public uses. Using embedding search to retrieve documents is credible and reliable.
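For what it's worth, here's a minimal sketch of what "embedding search" means in this context - the document vectors below are hand-made toy stand-ins for real model embeddings, not anything from OpenAI's actual system:

```python
import math

# Toy sketch of embedding-based retrieval: documents and the query are
# mapped to vectors (hand-made stand-ins for real model embeddings here),
# and the closest documents by cosine similarity are returned as sources.
DOCS = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "privacy notice": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    # Cosine similarity: dot product normalized by vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, k=1):
    # Rank stored documents by similarity to the query embedding.
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

# A query about refunds embeds near the "refund policy" vector,
# so that document is retrieved as the source to answer from.
print(retrieve([0.85, 0.15, 0.05]))
```

The key point: the answer is grounded in retrieved documents, not in whatever the model happens to have memorized, which is why the reliability claim attaches to the retrieval system rather than to the bare chatbot.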
It's absolutely wild to claim that they're trying to use this to indicate that the general use chatbot version of ChatGPT is "credible, authoritative and reliable" when every user is shown a full screen message to the contrary upon signup, and a disclaimer to the contrary on every single chat. For that matter, even if you just straight up ask ChatGPT if it's a reliable source, they've specifically trained it to tell you that it's not.
“Cognitive dissonance” is the discomfort one feels when trying to hold two contradictory ideas in one's head at once. I’m not sure how you’re using the term, but it doesn’t sound like you’re using it to mean that.
At any rate, it kind of seems like your argument would apply to any marketing of ChatGPT as a useful product. It sounds like this is a legitimate use case for the technology that is being discussed in accurate terms. I’m not sure what else you want from them.
It refers to the type of advertising that intentionally includes contradictory messaging (effectively, talking out of both sides of its mouth).
That's why it isn't "wild" at all for them to intentionally put out messaging that touts their product as authoritative and trustworthy - while at the same time presenting ToS boilerplate and other disclaimers that, of course, say the exact opposite. It is in fact the precise intent of this messaging to play with the users' heads in this way.
It kind of seems like your argument would apply to any marketing of ChatGPT as a useful product.
No - just marketing that disingenuously implies that, at the end of the day, it's still a reliable and trustworthy source of information - one on which to base decisions worth billions and billions of dollars, no less.
But again, it doesn’t imply that. It implies that a completely different system backed by an actual database of verified information is reliable, which is true. How would you suggest they talk about that technology?