
>A few months back, the expert consensus was that we were many years away from an AI playing Go at the 9-dan level.

These kinds of predictions are almost always useless. You can always find people who say it'll take n years before x happens, but no one can predict which approaches will work or how much improvement they'll confer.

> What this underscores, if anything, is the accelerating pace of technological growth, for better or for worse.

What? This is a non-sequitur. Continued advancement doesn't mean that it is accelerating, and even if this does represent an unexpected achievement that doesn't mean that future development will maintain that pace.

Appreciate it for what it is - an historic achievement for AI & ML - and stop trying to attach broader significance to it.




> These kinds of predictions are almost always useless.

Let's rephrase. For a long time, the expert consensus regarding Go was that it was extremely difficult to write strongly-performing AI for. From the AlphaGo Paper: Go presents "difficult decision-making tasks; an intractable search space; and an optimal solution so complex it appears infeasible to directly approximate using a policy or value function."

For many years, the state-of-the-art Go AI stagnated or grew very slowly, reaching at most the amateur dan level. AlphaGo presents a huge and surprising leap.

> Continued advancement doesn't mean that it is accelerating

Over constant time increases, AI is tackling problems that appear exponentially more difficult. In particular, see Checkers (early '90s) vs Chess ('97) vs Go ('16). The human advantage has generally been understood to be the breadth of the game tree, nearly equivalent to the complexity of the game.

If we let x be the maximum complexity of a task at which AI performs as well as a human, then I would argue that x has been growing at an accelerating pace over the past few decades.


"and an optimal solution so complex it appears infeasible to directly approximate using a policy or value function."

To be clear, the above refers to specific concepts in Reinforcement Learning.

A policy is a function from state (in Go, where all the stones are) to action (where to place the next stone). I agree that it is unlikely we'll have an effective policy function, at least one that can be computed efficiently (no tree search); otherwise it's not what a Reinforcement Learning researcher typically calls a policy function.

A value function is a function from state to numerical "goodness", and is more or less one step removed from a policy function: you can choose the action that takes you to the state with the highest value. It faces the same representational problems.
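
Roughly, in toy code (the board encoding and the heuristic below are made up purely to illustrate the two concepts, nowhere near what AlphaGo actually learns):

  # Toy sketch of the two RL notions above, on an invented tiny board.
  Board = tuple  # e.g. 9 entries for a 3x3 toy board: 0 empty, 1 us, -1 them

  def value(state: Board) -> float:
      """Value function: state -> numerical 'goodness'.
      Here a naive stone-count difference; a real one must capture far more."""
      return float(sum(state))

  def policy(state: Board) -> int:
      """Policy function: state -> action, with no tree search.
      Here derived greedily from the value function (one step removed)."""
      legal = [i for i, p in enumerate(state) if p == 0]
      return max(legal, key=lambda i: value(state[:i] + (1,) + state[i + 1:]))

  print(policy((0, 1, -1, 0, 0, 0, -1, 0, 0)))  # picks an empty point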


> AI is tackling problems that appear exponentially more difficult.

The hardest AI problems are the ones that involve multiple disciplines in deep ways. Here's a top tier artificial intelligence problem: given a plain English description of a computer program, implement it in source code.

There might be some cases where this is possible, and others that are bound to fail.

Those are the kinds of difficult problems in AI, which combine knowledge, understanding, thought, intuition, inspiration, and perspiration, or demand invention. We would be lucky to make linear progress in this area, let alone see exponential growth.

I think there's certainly an impression of exponential progress in AI in popular culture, but the search space is greater than factorial in size, and I think hackers should know that.


> To be fair, in terms of the complexity of rules, checkers is easier to understand than go which is easier to understand than chess. And honestly, go seems like the kind of brute-force simple, parallel problem that we can solve now without too much programming effort

Your intuition is mistaken. Go is indeed "easier to understand" than Chess in terms of its rules, but it is arguably harder to play well and has a far larger search space, which makes it less amenable to brute force. That is precisely why people thought it would be impossible for a computer to play it consistently at champion level.

I don't think the achievement of AlphaGo is solely due to increased processing power; otherwise, why did people think Go was such a hard problem?


> it is arguably harder to play well and has a way larger search space, which makes it less amenable to brute force, and this was precisely why people thought it'd be impossible for a computer to play it consistently at champion level.

Are human champions not subject to those same difficulties of the game, though? When you're pitting the AI against another player who's also held back by the large branching factor of the search tree, then how relevant really is that branching factor anyway in the grand scheme of things? A lot of people talk about Go's search space as if human players magically aren't affected by it too. And the goal here was merely to outplay a human, not to find the perfect solution to the game in general.

(These are honest questions -- I am not an AI researcher of any kind.)


Go players rely heavily on pattern recognition and heuristics, something we know humans to be exceptionally good at.

For example, go players habitually think in terms of "shape"[1]. Good shape is neither too dense (inefficiently surrounding territory) nor too loose (making the stones vulnerable to capture). Strong players intuitively see good shape without conscious effort.

Go players will often talk about "counting" a position[2] - consciously counting stones and spaces to estimate the score or the general strength of a position. This is in contrast to their usual mode of thinking, which is much less quantitative.

Go is often taught using proverbs[3], which are essentially heuristics. Phrases like "An eye of six points in a rectangle is alive" or "On the second line eight stones live but six stones die" are commonplace. They are very useful in developing the intuition of a player.

As I understand it, the search space is largely irrelevant to human players because they rarely perform anything that approximates a tree search. Playing out imaginary moves ("reading", in the go vernacular) is generally used sparingly in difficult positions or to confirm a decision arrived at by intuition.

Go is the board game that most closely maps to the human side of Moravec's paradox[4], because calculation has such low value. AlphaGo uses some very clever algorithms to minimise the search space, but it also relies on 4-5 orders of magnitude more computer power than Deep Blue.

  [1] https://en.wikipedia.org/wiki/Shape_(Go)
  [2] http://senseis.xmp.net/?Counting
  [3] https://en.wikipedia.org/wiki/Go_proverb
  [4] https://en.wikipedia.org/wiki/Moravec%27s_paradox


quoting https://news.ycombinator.com/item?id=10954918 :

> Go players activate the brain region of vision, and literally think by seeing the board state. A lot of Go study is seeing patterns and shapes... 4-point bend is life, or Ko in the corner, Crane Nest, Tiger Mouth, the Ladder... etc. etc.

> Go has probably been so hard for computers to "solve" not because Go is "harder" than Chess (it is... but I don't think that's the primary reason), but instead because humans brains are innately wired to be better at Go than at Chess. The vision-area of the human's brain is very large, and "hacking" the vision center of the brain to make it think about Go is very effective.


This is a great question!

Sadly, I'm neither an AI researcher nor a Go player; I think I've played fewer than 10 games. I don't know if we truly understand how great Go players play. About 10 years ago, when I was interested in computer Go players, I read a paper (I can't remember the title, unfortunately) that claimed that the greatest Go players cannot explain why they play the way they do, and frequently mention their use of intuition. If this is true, then we don't know how a human plays. Maybe there is a different thought process which doesn't involve searching a tree.


Sure.


The problem with Go was evaluating leaf nodes. Sure, you could quickly enumerate every possible position 6 moves out, but accurately deciding whether position 1 is better than positions 2 through 2 billion is a really hard problem.

In that respect chess is a much simpler problem, as you remove material from the board, prefer some locations over others, etc., whereas in Go every candidate position generally has the same number of stones on it, and it's all about balancing local and board-wide gains.
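
To make that concrete (a minimal sketch, nothing like a real engine): in chess, a crude material count with the conventional piece values already gives a usable leaf score, and there is no equivalent one-liner for a Go position.

  # Crude chess leaf evaluation: material count with conventional piece
  # values. Positive favours White. Input is the piece-placement field of
  # a FEN string. A real engine adds far more, but even this is a start.
  PIECE_VALUES = {'p': 1, 'n': 3, 'b': 3, 'r': 5, 'q': 9, 'k': 0}

  def material_eval(piece_field: str) -> int:
      score = 0
      for ch in piece_field:
          if ch.lower() in PIECE_VALUES:
              v = PIECE_VALUES[ch.lower()]
              score += v if ch.isupper() else -v
      return score

  print(material_eval("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR"))  # 0, balanced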


While I understand what you are getting at (this is still just a complete-information game, and it didn't solve AI), you are drastically understating the complexity of Go. It isn't actually possible to evaluate a significant fraction of the state tree in the early midgame because the branching factor is roughly 300. The major advance of AlphaGo is a reasonable state-scoring function using deep nets.

Unless you have a PhD in AI, or are a PhD student who has kept up with the current deep net literature, I assure you that the whole of AlphaGo will be unintuitive to you. However, if you were an AI PhD student, you likely wouldn't be so dismissive about this achievement.


> The major advance of AlphaGo is a reasonable state scoring function using deep nets.

That and the policy network to prune the branching factor.
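
Very roughly (a sketch, not DeepMind's actual code: policy_net, value_net, legal_moves and play are all stand-ins here, and the real system uses Monte Carlo tree search rather than fixed-depth negamax):

  import heapq

  # Sketch: the policy net's prior keeps only the k most promising moves
  # (pruning the ~300-way branching), and the value net scores positions
  # instead of playing games out to the end.
  def search(state, depth, k=5):
      if depth == 0:
          return value_net(state)            # value net scores the leaf
      priors = policy_net(state)             # move -> prior probability
      top_moves = heapq.nlargest(k, legal_moves(state), key=priors.get)
      return max(-search(play(state, m), depth - 1, k) for m in top_moves)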


> Here's a top tier artificial intelligence problem: given a plain English description of a computer program, implement it in source code.

I would consider it a breakthrough if we could get human beings to do this at a decent rate :)


Even harder and more common problem -- given code, give a plain English description of what it is intended to do, and describe any shortcomings of the implementation.


Yeah e.g. you could get it to check whether it could go into an infinite loop.

Oh wait .... https://en.wikipedia.org/wiki/Halting_problem


You could, for all practical purposes. The Halting problem only applies in general when you're considering all possible programs, but you really only need to consider the well-written ones, because then you can filter out the poorly written ones.
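
In the crudest practical sense, something like this (a toy sketch of my own, obviously not a halting-problem solver) already catches the programs that matter day to day:

  import subprocess

  def appears_to_halt(cmd, timeout_s=5):
      """Crude practical check: run the program and see whether it finishes
      within a time budget. Proves nothing in general; it just flags programs
      that obviously don't terminate quickly."""
      try:
          subprocess.run(cmd, timeout=timeout_s, capture_output=True)
          return True               # finished within the budget
      except subprocess.TimeoutExpired:
          return False              # still running; suspicious, not proof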


Here's a top-tier human intelligence problem: given a requirement, provide an accurate English description of a program.


Wait, what is the plan to brute-force Go? The search space is beyond immense...
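
For a rough sense of scale (back-of-the-envelope, upper bound only):

  # Each of the 361 points on a 19x19 board is empty, black, or white,
  # so 3^361 is an upper bound on board states (legality and repetition
  # rules shave this down, but not nearly enough for brute force).
  upper_bound = 3 ** 361
  print(f"3^361 ~= 10^{len(str(upper_bound)) - 1}")  # roughly 10^172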


> If we let x be the maximum complexity of a task at which AI performs as well as a human, then I would argue that x has been growing at an accelerating pace over the past few decades.

At ONE task, yes. But humans, while only average at many individual things, excel at adapting to many different tasks, all the time. Typical AIs (as we know them now) cannot ever hope to replicate that.


This seems to have been linked to a lot recently but I feel it is relevant to the discussion on technology advances pertaining to AI.

http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...


> Continued advancement doesn't mean that it is accelerating, and even if this does represent an unexpected achievement that doesn't mean that future development will maintain that pace.

Advancement faster than predicted, coupled with the (true) fact that people's predictions tend to assume a constant rate of advancement [citation needed], does mean accelerating advancement. Actually, all you'd need to show accelerating advancement is a trend of conservative predictions and the fact that these predictions assume a non-decreasing rate of advancement; if we're predicting accelerating advancement and still underestimating its rate, advancement must still be accelerating.

It even seems like this latter case is where we're at, since people who assume an accelerating rate of advancement seem to assume that the rate is (loosely) quadratic. However, given that the rate of advancement tends to be based on the current level of advancement (a fair approximation, since so many advancements themselves help with research and development), we should expect it to be exponential. That's what exponential means.

However, the reality seems like it might be even faster than exponential. This is what the singularitarians think. When you plot humanity's advancements using whatever definition you like, look at the length of time between them to approximate rate, and then try to fit this rate to a regression, it tends to fit regressions with vertical asymptotes.
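
To spell out the distinction I'm drawing (just the standard textbook forms, with A as "level of advancement" and k an arbitrary constant):

  % Rate proportional to the current level: exponential growth.
  %   dA/dt = k A    =>   A(t) = A_0 e^{k t}
  % Rate growing faster than linearly in A (e.g. A^2): finite-time blow-up,
  % i.e. a curve with a vertical asymptote at t = 1/(k A_0).
  \[
    \frac{dA}{dt} = kA \;\Rightarrow\; A(t) = A_0 e^{kt},
    \qquad
    \frac{dA}{dt} = kA^2 \;\Rightarrow\; A(t) = \frac{A_0}{1 - kA_0 t}.
  \]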


> These kinds of predictions are almost always useless. You can always find people who say it'll take n years before x happens, but no one can predict which approaches will work, and how much improvement they'll confer.

True, but it's pretty refreshing to have a prediction about AI being N years from something that is wrong in the OTHER direction.

Contrary to your point about 'appreciate it for what it is', there is ONE lesson I hope people take from it: You can't assume AI progression always remains in the future.

A general cycle I've seen repeated over and over:

* sci-fi/futurists make a bunch of predictions
* some subset of those predictions are shown to be plausible
* general society ignores those possibilities
* an advancement happens with general societal implications
* society freaks out

Whether it's cloning (a la Dolly the Sheep, where people demonstrated zero understanding of what genetic replication was, e.g. a genetic clone isn't "you"), self-driving cars (after decades of laughing at the idea because "who would you sue?", society is suddenly scrambling to adjust because it never wanted to think past treating that question as academic), everyone having an internet-connected phone in their pocket (see the encryption wars... again), or the existence of a bunch of connected computers with a wealth of knowledge available, society has always done little to avoid knee-jerk reactions.

Now we have AI (still a long way off from AGI, granted) demonstrating not only that it can do things we thought weren't going to happen soon (see: Siri/Echo/Cortana/etc.), but that it can break a major milestone sooner than almost anyone thought. We've been told for a long time that, because of typical technology patterns, we should expect the jump from "wow" to "WOW!" to happen pretty quickly. We've had big thinkers warning of the complications/dangers of AI for a long time.

And to date, AI has only been a big joke to society, or the villain of B-grade movies. It'd be nice, if just once, society at least gave SOME thought to the implications a little in advance.

I don't know when an AGI will occur - years, decades, centuries - but I'm willing to bet it takes general society by surprise and causes a lot of people to freak out.


> > What this underscores, if anything, is the accelerating pace of technological growth, for better or for worse.

> What? This is a non-sequitur. Continued advancement doesn't mean that it is accelerating, and even if this does represent an unexpected achievement that doesn't mean that future development will maintain that pace.

It's not a non-sequitur, but there is an implicit assumption you perhaps missed. The assumption is that the human failure to predict this AI advance is caused by an evolution curve with order higher than linear. You see, humans are amazingly good at predicting linear change. We are actually quite good at predicting x² changes (frisbee catching). Higher than that, we are useless. Even at x², we fail in some scenarios (braking distance at unusual speeds, like 250km/h on the autobahn for example).

That it will maintain its pace is an unfounded assumption. However, assuming that the pace will slow is just as unfounded. All in all, I'd guess it is safest to assume tech will evolve as it has over the last 5000 years.

That would be an exponential evolution curve.


These kinds of statements are only valuable to me if they are followed by "and these are the challenges that need to be overcome, which are being worked on".

Otherwise it's a blanket retort. It's like saying "There are lots of X".

Ok, name 7. If you get stuck after 2 or 3 you're full of it.


>>You can always find people who say it'll take n years before x happens

Interesting, people seem to be saying the same about self driving cars.


You sound like the kind of person who says "AI will never drive" or "AI will never play Go." True, there's a lot of hype, which ML experts are concerned may lead to another bubble burst & winter. On the flip side there are a lot of curmudgeonly naysayers such as yourself, at whom ML experts roll their eyes and forge ahead. What I find is that both extremes don't understand ML; they're just repeating their peers. ML is big, and it's gonna do big things. Not "only Go", not "take over the world"; somewhere in between.


I'm actually very optimistic about the state of AI and ML lately. The difference is that I don't anthropomorphize the machines or ascribe human values to their behavior. I absolutely believe AI will drive (and save lives); I have always believed that AI will play Go; I believe that AI will grow to match and surpass humans in many things we assume that only humans can do. Humans aren't perfect, but that doesn't mean that machines who outperform us are perfect either.

AlphaGo plays Go. It probably doesn't play Go like a human (because a human probably can't do what it does), but that's OK because it also appears to be better than humans. AlphaGo is interesting not because it has done something impossible, but because it has proven possible a few novel ideas that could find other interesting applications, and adds another notch to the belt of a few other tried and tested techniques.



