
> No doubt? Seriously? What kind of knowledge do you have to make such statements?

Uh, click the link in the OP and find out? AI just beat a top 5 human professional 4-1. Go rankings put that AI at #2 in the world.

If AlphaGo improves at all at this point it will have achieved a level well beyond any human.

It is incredibly, ludicrously unlikely that AlphaGo has reached the absolute peak of its design, given that it went from an Elo of ~2900 to ~3600 in just a few months.
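
For a rough sense of what that Elo jump means, here's the standard Elo expected-score formula applied to the figures above (the ~3500 "top pro" rating is just a ballpark I'm assuming for illustration):

    # Standard Elo expected-score formula: probability that A beats B.
    def expected_score(rating_a, rating_b):
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

    # Figures cited in this thread, plus an assumed ~3500 for a top professional.
    print(expected_score(2900, 3500))  # ~0.03 -> near-certain loss per game
    print(expected_score(3600, 3500))  # ~0.64 -> clear favorite per game

So a few months took it from roughly a 3% chance per game against a top pro to being the favorite.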




There is actually a lot of room for improvement. Just some of the possibilities:

(1) Better time control. Maybe when the probability of winning drops below, say, 50% but has not yet hit the losing threshold, spend extra time (see the sketch after this list).

(2) Introducing "anti-fragility". Maybe even train the net asymmetrically, playing from losing positions so it gains more experience with them.

(3) Debug and find out why it plays what look like nonsense forcing moves when it thinks it is behind (assuming that is what is actually happening).
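
For (1), here's a minimal sketch of the kind of time-control heuristic I mean. The thresholds and time budgets are completely made up, and nothing here reflects how AlphaGo actually manages its clock:

    BASE_TIME_S = 30.0       # normal per-move search budget (made-up figure)
    EXTRA_TIME_S = 90.0      # extended budget for critical positions (made-up figure)
    LOSING_THRESHOLD = 0.10  # below this, extra search probably won't help

    def move_time_budget(win_probability):
        """Seconds to spend searching the current move."""
        if LOSING_THRESHOLD < win_probability < 0.50:
            # Behind but not yet lost: the game likely hinges on the next
            # few moves, so invest extra time here.
            return EXTRA_TIME_S
        return BASE_TIME_S

    print(move_time_budget(0.65))  # comfortable lead -> 30.0
    print(move_time_budget(0.35))  # behind but not lost -> 90.0
    print(move_time_budget(0.05))  # essentially lost -> 30.0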

There's another interesting thing. Within the Go community there may have been some misplaced pride initially, but the pros and the community changed their attitude toward AlphaGo very quickly (as they have in the past when something that seemed not to work proved itself in actual games). They see an opportunity for the advancement of Go as a game. I think a lot of the pros are very curious, even excited, and may be knocking on Google's doors trying to get access to AlphaGo.


To be fair, I think a larger sample of human-vs-computer games is needed. Let the top pros train with the computers, and then we can measure what level is actually beyond any human.


Being the best-ranked player != playing well beyond humans. When the AI can play 1,000 games and never lose, that's well beyond people.

Granted, chess AI is basically at that point right now. But Go AI still has a ways to go.
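
To put a rough number on "never lose in 1,000 games": under the standard Elo expected-score model (the rating gaps below are just illustrative round numbers), a sweep like that implies a gap far bigger than merely being ranked #1:

    # Probability of winning all 1,000 games at a given Elo gap.
    def win_prob(elo_gap):
        return 1.0 / (1.0 + 10 ** (-elo_gap / 400.0))

    for gap in (400, 800, 1200, 1600):
        print(gap, round(win_prob(gap) ** 1000, 3))
    # 400  -> 0.0    (~0.909 per game)
    # 800  -> 0.0    (~0.990 per game)
    # 1200 -> 0.368  (~0.999 per game)
    # 1600 -> 0.905  (~0.9999 per game)

Even a 1,200-point gap only gives about a one-in-three chance of sweeping 1,000 games.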


Given the leap in progress between this series of games and the previous one only a few months earlier, I'd expect "never lose" to become a recognized reality in about a year.


Possibly, though it's not clear whether AlphaGo is playing better or simply approaching the game differently. Game five was close, and AlphaGo seemed to win mostly due to time considerations.

PS: Honestly, it might be a year or a decade, but I suspect there is plenty of headroom to drastically surpass human play.


When AlphaGo does lose, it seems to happen when outright bugs cause it to make moves that are readily recognizable as mistakes. It doesn't seem to happen because it's not quite "smart" enough, or because its underlying algorithms are fundamentally flawed.

That's a big difference. Bugs can be identified and fixed. By the time AlphaGo faces another top professional (Ke Jie?), we can safely assume that whatever went wrong in Game 4 won't happen again.

Consider how much stronger the system has become in the few months since the match against Fan Hui. Another advance like that will place it far beyond anything humans will ever be able to compete with.


> When AlphaGo does lose, it seems to happen when outright bugs cause it to make moves that are readily recognizable as mistakes

I'm not sure this is true. It made the wrong move at move 79 in game 4, but I'm not sure that should be considered an obvious mistake.

My understanding is that the moves that people said were most obviously mistakes later in the game were a result of it being behind (and desperately trying to swing the lead back in its favor), rather than a cause.


> Go rankings put that AI at #2 in the world.

Go rankings weren't designed for ML algorithms, which can have high-level deficiencies and behave erratically under certain conditions.




