
Four components:

Learning (viewing millions of professional game moves)

Experience (playing different versions of itself)

Intuition (ability to accurately estimate the value of a board)

Imagination (evaluating a series of "what if?" scenarios using Monte Carlo Tree Search; see the sketch after this list)
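
As a rough illustration of how the last two components fit together, here is a minimal MCTS loop in which a learned value function (the "intuition") scores leaf positions instead of random rollouts. This is only a sketch: all names (estimate_value, legal_moves, apply_move, and so on) are hypothetical stand-ins, and AlphaGo's actual search also uses a policy network to bias move selection.

    import math

    class Node:
        def __init__(self, state, parent=None):
            self.state = state
            self.parent = parent
            self.children = {}      # move -> Node
            self.visits = 0
            self.value_sum = 0.0

    def ucb_score(parent, child, c=1.4):
        # Trade off the child's average value against how
        # rarely it has been explored.
        if child.visits == 0:
            return float("inf")
        exploit = child.value_sum / child.visits
        explore = c * math.sqrt(math.log(parent.visits) / child.visits)
        return exploit + explore

    def mcts(root, estimate_value, legal_moves, apply_move, n_sims=1000):
        # (Terminal positions and sign-flipping between the two
        # players are omitted for brevity.)
        for _ in range(n_sims):
            node = root
            # 1. Selection: descend by UCB until reaching a leaf.
            while node.children:
                parent = node
                _, node = max(parent.children.items(),
                              key=lambda mc: ucb_score(parent, mc[1]))
            # 2. Expansion: add a child for each legal move.
            for move in legal_moves(node.state):
                node.children[move] = Node(apply_move(node.state, move), node)
            # 3. Evaluation ("intuition"): a learned value function
            #    scores the position instead of a random playout.
            value = estimate_value(node.state)
            # 4. Backup: propagate the result up to the root.
            while node is not None:
                node.visits += 1
                node.value_sum += value
                node = node.parent
        # Play the most-visited move at the root.
        return max(root.children, key=lambda m: root.children[m].visits)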

I think the significant thing about AlphaGo is that, apart from some hand-crafting in the Monte Carlo Tree Search routines, this is all general-purpose programming.

It may only be baby steps, but it does feel like a genuine step towards true (general) AI.




> Learning (viewing millions of professional game moves)

According to the last press conference, it was apparently trained on strong amateur games from the internet. Afterwards, it just played itself, as you mentioned.


Yes, that was surprising to me as well. It seems unfair not to give it access to the thousands of years of knowledge in the Go community, though it's even more impressive that it still plays so well.


I think self-play games would be an even better source for learning, because they are at 9p level, not amateur level.
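
If it helps to picture it, a minimal version of that data-generation loop might look like the following. This is a hypothetical sketch with made-up names (play_game and the rest); the one grounded detail is that AlphaGo reportedly sampled opponents from a pool of earlier versions of itself rather than always playing its latest self, to avoid overfitting to a single opponent.

    import random

    def self_play_games(current_policy, past_policies, play_game, n_games):
        # Generate training games at the engine's own strength by
        # pitting the current policy against earlier versions.
        data = []
        for _ in range(n_games):
            opponent = random.choice(past_policies)  # pool, not just latest
            moves, winner = play_game(current_policy, opponent)
            data.append((moves, winner))             # (game record, outcome)
        return data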


I dunno, neural nets don't do that well on chess, for instance; at least, the chess engine that was recently published only reached IM level, which is really bad compared to top-notch chess engines. That convolutional neural nets work better for Go than for chess is intuitively unsurprising because of the different game rules. IMO it's at least as likely that this is a step towards a better Go engine, but not towards "true AI."

People say Go is much harder than chess, but this is misleading. Both games are finite trees that are too large for any physical entity we know of to search exhaustively; a back-of-the-envelope calculation below illustrates the scale. Which tree is larger is irrelevant in a game between two players, neither of whom can search the entire tree: both essentially rely on heuristics. Machines beat people at chess first, hence it was assumed that "chess is easier for machines" and "Go is harder", but a conclusion of that sort can always be reversed by further research. Eventually, it is IMO likely that machines will be impossible for humans to beat at both games, and generally at any kind of board game, given enough research.

But IMO no board game is very much like the "real life" in which our own intelligence operates, and I think people do not have a great intuition about which game is more like "real life" than another. Instead, whichever game is most popular among the group of people in question and is not yet "solved" is considered the hallmark of intelligence; here, the process Go aficionados are going through as machines get better is very much like the one chess aficionados went through a decade or more ago. Then once a game is "solved", the goalposts move to the next one, and the "solved" game is officially declared unrelated to "real intelligence" (this part happens when a credit bubble pops and AI breakthroughs get peddled less as a result).

Personally, "the" test of intelligence is still the Turing test, or failing that, some variant such as automated translation that you can't tell from good human translation. This is of course "unfair" to machines, in that they've been better at multiplying numbers since the 40s and that ought to count for something, too; the reason I like the Turing test is that a machine passing it seems very likely to be almost strictly smarter than me, that is, as good or better than me at almost everything.
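
For a sense of the scale, here is the back-of-the-envelope arithmetic using the usual rough estimates: branching factor ~35 and ~80 plies for a chess game, ~250 and ~150 plies for Go. Both results dwarf the ~10^80 atoms in the observable universe, which is the point above: neither tree is exhaustively searchable, whichever is bigger.

    import math

    def tree_size_log10(branching_factor, plies):
        # log10(b^d) = d * log10(b)
        return plies * math.log10(branching_factor)

    print("chess: roughly 10^%d positions" % tree_size_log10(35, 80))    # ~10^123
    print("Go:    roughly 10^%d positions" % tree_size_log10(250, 150))  # ~10^359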



