So much this. It seems both OpenAI and Microsoft keep changing something about the model in their AI products, killing its performance.
In the case of Copilot chat, it could intelligently figure out and fix an issue given an error message from a linter. Nowadays it just rephrases the error message as a solution, effectively saying something like “To fix X, I would update the code so that it doesn’t happen” instead of fixing the code.
After paying OpenAI for almost a year, I canceled my GPT-4 subscription a week ago. More and more, where GPT used to be a replacement for Googling questions, it now mostly gives vague answers or tells me things like "<put code here to do task>" instead of showing me the code.
I end up having to use search anyway, which defeats the point.
I find this a really sad development and I can't wait for open source models and hardware to catch up in ability. A year ago, GPT-4 was terrific and helpful.
GitHub's Copilot still blows my mind daily. At least not all AI tools are being degraded in capability.
I still remember people attempting to gaslight me, saying it hadn't gotten worse at all, when I have identical prompts from February where it would generously code out every single thing I asked for. Now it literally just gives me one line of pseudocode with method headers for various sections and explains what I would want to do (while of course lacking important details). And if I then ask it to implement something specifically, it still puts more pseudocode comments INSIDE the function I asked it to implement.