I have a perfect example of this, but with Bard rather than ChatGPT.
The first day I used Bard I was blown away. It was doing things, maybe only by coincidence, that ChatGPT was not capable of doing.
For example, logical reasoning. I gave it a set of "actions" it could take and a general problem to solve: navigate a robot within a 10x10 grid. I told it which direction the robot was facing to begin with and where I wanted the robot sent. Bard solved that problem in the most efficient way, without error.
I've asked ChatGPT to solve similar problems and it's never been able to do this.
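For anyone curious, here is roughly the shape of that problem. This is a sketch with made-up details (the action names, start pose, and goal below are mine for illustration, not my exact prompt); a correct answer amounts to a shortest path over (position, facing) states, which a quick BFS can verify:

```python
# Sketch of the robot-on-a-grid problem (illustrative action names,
# not the exact prompt): the robot can turn_left, turn_right, or move
# one cell forward, and must reach the goal in as few actions as possible.
from collections import deque

DIRS = ["N", "E", "S", "W"]                              # clockwise order
STEP = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}

def shortest_plan(start, facing, goal, size=10):
    """BFS over (x, y, facing) states; returns a shortest action list."""
    start_state = (*start, facing)
    queue = deque([(start_state, [])])
    seen = {start_state}
    while queue:
        (x, y, d), plan = queue.popleft()
        if (x, y) == goal:
            return plan
        i = DIRS.index(d)
        dx, dy = STEP[d]
        candidates = [
            ((x, y, DIRS[(i - 1) % 4]), "turn_left"),
            ((x, y, DIRS[(i + 1) % 4]), "turn_right"),
            ((x + dx, y + dy, d), "move"),
        ]
        for state, action in candidates:
            nx, ny, _ = state
            if 0 <= nx < size and 0 <= ny < size and state not in seen:
                seen.add(state)
                queue.append((state, plan + [action]))
    return None  # unreachable on an open grid, but kept for safety

# Example: start at (0, 0) facing north, goal at (7, 3) on a 10x10 grid.
print(shortest_plan((0, 0), "N", (7, 3)))
```

For this start and goal the optimum is 11 actions (3 moves north, 1 right turn, 7 moves east), and that is the kind of minimal, error-free plan Bard produced on its first try.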
The second thing happened in that same first session with Bard. It explained a solution to a problem I gave it, and I then told it that I had found another reference that gave a completely different solution. It proceeded to tell me, basically, "No, I checked my work, and it's correct. The other site gave a different solution because it used a different well-known equation to solve the same problem." In other words, both are equations for the same KIND of problem, but because their formulas differ, each produces a different solution. ChatGPT has, at least in my experience, never shown that kind of astuteness. I asked it the same question, but it was just confused by what I was saying and couldn't answer the way Bard did.
That said, ever since that first session I have noticed a dramatic drop in the quality of Bard's answers. It repeats itself often and gives only extremely general answers to my questions. Even when I prompt it for greater detail, it reverts to the same bland, generic answer.