
Welcome to the AI effect! Every time AI makes an accomplishment, it is disregarded. The goalposts are perpetually moved. "AI is whatever computers can't do yet."

People said for years that Go would never be beaten in our lifetime. They said this because Go has a massive search space. It can't be beaten by brute force search. It requires intelligence, the ability to learn and recognize patterns.

And it requires doing that at the level of a human. A brute force algorithm can beat humans by doing a stupid thing far faster than a human can. But a pattern recognition based system has to beat us by playing the same way we do. If humans can learn to recognize a specific board pattern, it also has to be able to learn that pattern. If humans can learn a certain strategy, it also has to be able to learn that strategy. All on its own, through pattern recognition.

And this leads to a far more general algorithm. The same basic algorithm that can play Go can also do machine vision, compose music, translate languages, or drive cars. Unlike the brute force method that only works on one specific task, the general method is, well, general. We are building artificial brains that are already learning to do complex tasks faster and better than humans. If that's not progress towards AGI, I don't know what is.

The "moving goalposts" argument is one that really needs to die. It's a classic empty statement. Just because other people made <argument> in the past does not mean it's wrong. It proves nothing. People also predicted AGI many times over-optimistically; probably just as often as people have moved goalposts.

I don't know what you are trying to say. I'm making an observation that whenever AI beats a milestone, there are a bunch of pessimists that come out and say "but obviously X was beatable by stupid algorithms. I will believe AI is making progress when it beats Y!"

Those arguments absolutely are wrong. For one thing, it's classic hindsight bias. When you make a wrong prediction, you should update your model, not come up with justifications for why your model doesn't need to change.

But second, it's another bias, where nothing ever looks like AI, or AI progress. People assume that intelligence should be complicated, that simple algorithms can't produce intelligent behavior, that human intelligence has some kind of mystical attribute that can't be replicated in a computer.

I said exactly what I said. Calling out "moving the goalposts" does not refute the assertion that this does not get us nontrivially closer to AGI.

Whenever AI beats a milestone, there are a bunch of over-optimists that come out and make predictions about AGI. They have been wrong over and over again over the course of half a century. It's classic hindsight bias.

Yes it does! If you keep changing what you consider "AI" every time it makes progress, then it looks like we are never getting closer to AI. When in fact it is just classic moving of the goalposts.

And the optimists are being proven right. AGI is almost here.

This doesn't address my argument at all.

As far as I know, the goalpost of the Turing test has never moved.

That's because it hasn't been beaten yet! As soon as a chatbot beats a Turing test, a lot of AI deniers will come out and say that the Turing test doesn't measure 'real' intelligence, or isn't a valid test for <reasons>.

I know this is true because there are already a lot of people who think the Turing test isn't valid. They believe it could be beaten by a stupid chatbot, or by deception on the part of the AI. Just search for past discussions of the Turing test on HN; it comes up a lot.

There is no universally accepted benchmark of AI. Let alone a benchmark for AI progress, which is what Go is.

No one claimed that Go would require a human level AI to beat. But I am claiming that beating it represents progress towards that goal. Whereas passing the Turing test won't happen until the very end. Beating games like Go is a little milestone along the journey.

Viewed that way, I'll accept that beating Go represents progress. That's not the same as saying that it represents imminent evidence that singularity-style strong AI is almost upon us, as suggested in the post I was replying to. In the long term it might turn out to represent very minimal progress towards that goal.

Chatbots can already beat the Turing test.

Chatbots are trivially easy to beat. Just try teaching one a simple game and asking it to play with you. Basically, ask any question that requires it to form a mental model of something and then mutate or interrogate that model's state.

Many of the chatbot Turing test competitions have heavily rigged rules restricting the kinds of questions you're allowed to ask in order to give the bots a chance.

Only for bad judges. Just ask any AI 'what flavour do you reckon a meteorite is' or something weird like that, and watch it try to equivocate.

(the answer is Rocky Road by the way)
