Has there been any consensus on the phenomenon where people report a decrease in model performance? This decrease in quality is reported to take many forms: laziness, lower expressiveness, more mistakes, etc.
On the other hand, there are people who claim there hasn't been any degradation at all.[0]
If there is indeed no degradation, how could the perceived degradation be explained?
[0] https://community.openai.com/t/declining-quality-of-openai-m...
https://www.reddit.com/r/OpenAI/comments/18sc92o/with_all_th...
https://arxiv.org/pdf/2307.09009
By being disproportionately impressed previously. Maybe in the early days people were so impressed by their little play experiments that they forgave the shortcomings. Now that the novelty is wearing off and they try to use these models for productive work, the scales have tipped and failures are given more weight.