The "moving goalposts" argument is one that really needs to die. It's a classic empty statement. Just because other people made <argument> in the past does not mean it's wrong now. It proves nothing. People have also predicted AGI over-optimistically many times; probably just as often as people have moved goalposts.
I don't know what you are trying to say. I'm making an observation: whenever AI beats a milestone, a bunch of pessimists come out and say "but obviously X was beatable by stupid algorithms. I will believe AI is making progress when it beats Y!"
Those arguments absolutely are wrong. For one thing, they're classic hindsight bias. When you make a wrong prediction, you should update your model, not come up with justifications for why your model doesn't need to change.
But second, there's another bias at work, where nothing ever looks like AI or AI progress. People assume that intelligence must be complicated, that simple algorithms can't produce intelligent behavior, that human intelligence has some mystical attribute that can't be replicated in a computer.
I said exactly what I said. Calling out "moving the goalposts" does not refute the assertion that this does not get us nontrivially closer to AGI.
Whenever AI beats a milestone, a bunch of over-optimists come out and make predictions about AGI. They have been wrong over and over for half a century. It's classic hindsight bias.
Yes, it does! If you keep changing what you consider "AI" every time the field makes progress, then it looks like we are never getting closer to AI, when in fact that is just classic goalpost-moving.
And the optimists are being proven right. AGI is almost here.