Yes, I have noticed that their coding capabilities have been greatly reduced. Before, whenever I asked a question, ChatGPT sometimes gave me an incorrect answer, but it was able to fix it after a follow-up question. Nowadays, when the answer is incorrect, it stays wrong no matter how many times I try to get the correct one. It is becoming so frustrating that I am starting to use Google/Stack Overflow more frequently again.
I have a perfect example of this but with Bard, not with ChatGPT.
The first day I used Bard I was blown away. It, maybe only by coincidence, was doing things that ChatGPT was not capable of doing.
For example, logical reasoning. I gave it a set of "actions" it could take and a general problem to solve: navigating a robot within a 10x10 grid. I told it which direction the robot was facing to begin with, and where I wanted it to send the robot. Bard solved that problem in the most efficient way, without error.
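For anyone curious, the puzzle was roughly like the sketch below. The exact coordinates and action names ("left", "right", "forward") are from memory and purely illustrative; this is a small Python solver for the kind of problem I described, not the prompt I gave Bard:

    # Rough reconstruction of the grid puzzle: a robot on a 10x10 grid,
    # with turn-left / turn-right / move-forward actions, must reach a
    # target cell in as few actions as possible.
    from collections import deque

    GRID = 10   # cells (row, col) in 0..9
    DIRS = {"N": (-1, 0), "E": (0, 1), "S": (1, 0), "W": (0, -1)}
    LEFT = {"N": "W", "W": "S", "S": "E", "E": "N"}
    RIGHT = {"N": "E", "E": "S", "S": "W", "W": "N"}

    def shortest_plan(start, facing, goal):
        # Breadth-first search over (position, facing) states; returns the
        # shortest sequence of actions, i.e. the "most efficient" answer.
        queue = deque([(start, facing, [])])
        seen = {(start, facing)}
        while queue:
            (r, c), f, plan = queue.popleft()
            if (r, c) == goal:
                return plan
            dr, dc = DIRS[f]
            nr, nc = r + dr, c + dc
            moves = [((r, c), LEFT[f], "left"), ((r, c), RIGHT[f], "right")]
            if 0 <= nr < GRID and 0 <= nc < GRID:   # forward only if it stays on the grid
                moves.append(((nr, nc), f, "forward"))
            for pos, nf, action in moves:
                if (pos, nf) not in seen:
                    seen.add((pos, nf))
                    queue.append((pos, nf, plan + [action]))
        return None

    # e.g. robot at (0, 0) facing north, target cell (7, 3)
    print(shortest_plan((0, 0), "N", (7, 3)))

The point is that a correct answer requires tracking both position and orientation at every step, which is exactly the kind of multi-step bookkeeping these models tend to fumble.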
I've asked ChatGPT to solve similar problems and it's never been able to do this.
The second thing that happened in my first session with Bard was that it explained a solution to a problem I gave it, and then I told it I had found another reference that gave a completely different solution. It then proceeded to tell me, basically, "No, I checked my work, it's correct. The other site was giving a different solution because it was using a different well-known equation to solve the same problem." In other words, they're both equations for solving the same KIND of problem, but because their formulas differ, they each produce different solutions. ChatGPT has, at least in my experience, never shown that kind of astuteness. I asked it the same question, but it was just confused by what I was saying and couldn't answer the way Bard did.
That said, ever since that first session I have also noticed a dramatic drop in the quality of Bard's answers. It repeats itself often and only gives extremely general answers to the questions I ask. Even when I prompt it for greater detail, it always reverts to the same bland, generic answer.
Did you find any evidence? In my experience Bard got less usable (I cannot say dumber). I used to ask it to fix the grammar of sentences. Now it says it is an AI assistant and can't do that. But it works for some other people :/
Overall it is very confusing.
I kind of feel like the next logical step is for someone to invent a crowd-sourced, SETI-style way of building these large AI models collaboratively. Based on my experience, it seems clear they've decided to rein in these tools so users can't experience their true capabilities.
New open source models are popping up every day, and soon they'll be forced to compete by taking the safeguards off... unless they convince Congress to take action and create an ATF-like organization that shows up at your door for posting an AI model on Hugging Face.