For years, people said Go would never be beaten in our lifetime. They said this because Go has a massive search space: it can't be beaten by brute-force search. It requires intelligence, the ability to learn and recognize patterns.
And it requires doing that at the level of a human. A brute-force algorithm can beat humans by doing something stupid far faster than a human can. But a pattern-recognition-based system has to beat us by playing the same way we do. If humans can learn to recognize a specific board pattern, it has to be able to learn that pattern too. If humans can learn a certain strategy, it has to be able to learn that strategy too. All on its own, through pattern recognition.
And this leads to a far more general algorithm. The same basic algorithm that plays Go can also do machine vision, compose music, translate languages, or drive cars. Unlike the brute-force method, which only works on one specific task, the general method is, well, general. We are building artificial brains that are already learning to do complex tasks faster and better than humans. If that's not progress towards AGI, I don't know what is.
Those arguments absolutely are wrong. For one thing, it's classic hindsight bias. When you make a wrong prediction, you should update your model, not come up with justifications for why your model doesn't need to change.
But second, there's another bias at work, where nothing ever looks like AI, or like AI progress. People assume that intelligence must be complicated, that simple algorithms can't produce intelligent behavior, that human intelligence has some mystical attribute that can't be replicated in a computer.
Whenever AI beats a milestone, a bunch of over-optimists come out and make predictions about AGI. They have been wrong over and over for half a century. It's classic hindsight bias.
And the optimists are being proven right. AGI is almost here.
I know this is true because there are already a lot of people who think the Turing test isn't valid. They believe it could be beaten by a stupid chatbot, or by deception on the part of the AI. Just search past discussions of the Turing test on HN; it comes up a lot.
There is no universally accepted benchmark for AI, let alone a benchmark for AI progress, which is what Go is.
No one claimed that Go would require a human-level AI to beat. But I am claiming that beating it represents progress towards that goal. Passing the Turing test, by contrast, won't happen until the very end; beating games like Go is one of the little milestones along the journey.
Many of the chatbot Turing test competitions have heavily rigged rules that restrict the kinds of questions you're allowed to ask, in order to give the bots a chance.
(the answer is Rocky Road by the way)