> AI is tackling problems that appear exponentially more difficult.

The hardest AI problems are the ones that involve multiple disciplines in deep ways. Here's a top tier artificial intelligence problem: given a plain English description of a computer program, implement it in source code.

There might be some cases where this is possible, but others are bound to fail.

Those are the kinds of difficult problems in AI - the ones that combine knowledge, understanding, thought, intuition, inspiration, and perspiration, or that demand outright invention. We would be lucky to make linear progress in this area, let alone see exponential growth.

There's certainly an impression in popular culture that AI is making exponential progress, but the search space for problems like this is larger than factorial in size, and I think hackers should know that.




> To be fair, in terms of the complexity of rules, checkers is easier to understand than go which is easier to understand than chess. And honestly, go seems like the kind of brute-force simple, parallel problem that we can solve now without too much programming effort

Your intuition is mistaken. Go is indeed "easier to understand" than Chess in terms of its rules, but it is arguably harder to play well, and it has a far larger search space, which makes it less amenable to brute force. That is precisely why people thought it would be impossible for a computer to play consistently at champion level.
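
To put rough numbers on that (the branching factors below are commonly cited averages, and the figures are only illustrative):

  # Back-of-the-envelope comparison of how fast the two game trees grow.
  # Branching factors are commonly cited averages, not exact values.
  chess_branching = 35   # typical legal moves per chess position
  go_branching = 250     # typical legal moves per go position
  depth = 10             # look ahead ten plies

  print(f"chess positions at depth {depth}: {chess_branching ** depth:.1e}")
  print(f"go positions at depth {depth}:    {go_branching ** depth:.1e}")
  # ~2.8e15 for chess vs ~9.5e23 for go: roughly eight orders of magnitude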

I don't think the achievement of AlphaGo is solely due to increased processing power; otherwise, why would people have thought Go was such a hard problem?


> it is arguably harder to play well and has a way larger search space, which makes it less amenable to brute force, and this was precisely why people thought it'd be impossible for a computer to play it consistently at champion level.

Aren't human champions subject to those same difficulties of the game, though? When you're pitting the AI against another player who's also held back by the large branching factor of the search tree, how relevant is that branching factor, really, in the grand scheme of things? A lot of people talk about Go's search space as if human players magically weren't affected by it too. And the goal here was merely to outplay a human, not to solve the game outright.

(These are honest questions -- I am not an AI researcher of any kind.)


Go players rely heavily on pattern recognition and heuristics, something we know humans to be exceptionally good at.

For example, go players habitually think in terms of "shape"[1]. Good shape is neither too dense (inefficiently surrounding territory) nor too loose (leaving the stones vulnerable to capture). Strong players see good shape intuitively, without conscious effort.

Go players will often talk about "counting" a position[2] - consciously counting stones and spaces to estimate the score or the general strength of a position. This is in contrast to their usual mode of thinking, which is much less quantitative.
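
For flavour, here is what a crude, mechanical version of that counting might look like (area scoring on a made-up, already-finished 5x5 toy position; counting a live game is far subtler):

  # Toy "counting": area score = stones + surrounded empty points.
  # '.' is empty, 'B'/'W' are stones; the position below is made up.
  # An empty region counts as territory only if it touches just one colour.
  board = [
      "B B . B W",
      ". B B W W",
      "B B W W .",
      "B W W . W",
      ". B W W .",
  ]
  grid = [row.split() for row in board]
  size = len(grid)

  def neighbours(r, c):
      for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
          if 0 <= r + dr < size and 0 <= c + dc < size:
              yield r + dr, c + dc

  def area_score(colour):
      stones = sum(row.count(colour) for row in grid)
      territory, seen = 0, set()
      for r in range(size):
          for c in range(size):
              if grid[r][c] != "." or (r, c) in seen:
                  continue
              # flood-fill the empty region, noting which colours border it
              region, borders, stack = [], set(), [(r, c)]
              while stack:
                  cr, cc = stack.pop()
                  if (cr, cc) in seen:
                      continue
                  seen.add((cr, cc))
                  region.append((cr, cc))
                  for nr, nc in neighbours(cr, cc):
                      if grid[nr][nc] == ".":
                          stack.append((nr, nc))
                      else:
                          borders.add(grid[nr][nc])
              if borders == {colour}:
                  territory += len(region)
      return stones + territory

  print("B:", area_score("B"), "W:", area_score("W"))  # B: 12  W: 13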

Go is often taught using proverbs[3], which are essentially heuristics. Phrases like "An eye of six points in a rectangle is alive" or "On the second line eight stones live but six stones die" are commonplace. They are very useful in developing the intuition of a player.

As I understand it, the search space is largely irrelevant to human players because they rarely perform anything that approximates a tree search. Playing out imaginary moves ("reading", in the go vernacular) is generally used sparingly in difficult positions or to confirm a decision arrived at by intuition.

Go is the board game that most closely maps to the human side of Moravec's paradox[4], because calculation has such low value. AlphaGo uses some very clever algorithms to minimise the search space, but it also relies on 4-5 orders of magnitude more computer power than Deep Blue.

  [1] https://en.wikipedia.org/wiki/Shape_(Go)
  [2] http://senseis.xmp.net/?Counting
  [3] https://en.wikipedia.org/wiki/Go_proverb
  [4] https://en.wikipedia.org/wiki/Moravec%27s_paradox


quoting https://news.ycombinator.com/item?id=10954918 :

> Go players activate the brain region of vision, and literally think by seeing the board state. A lot of Go study is seeing patterns and shapes... 4-point bend is life, or Ko in the corner, Crane Nest, Tiger Mouth, the Ladder... etc. etc.

> Go has probably been so hard for computers to "solve" not because Go is "harder" than Chess (it is... but I don't think that's the primary reason), but instead because humans brains are innately wired to be better at Go than at Chess. The vision-area of the human's brain is very large, and "hacking" the vision center of the brain to make it think about Go is very effective.


This is a great question!

Sadly, I'm neither an AI researcher nor a Go player; I think I've played fewer than 10 games. I don't know if we truly understand how great Go players play. About 10 years ago, when I was interested in computer Go, I read a paper (I can't remember the title, unfortunately) that claimed the greatest Go players cannot explain why they play the way they do, and frequently mention their use of intuition. If that is true, then we don't really know how a human plays. Maybe there is a different thought process, one that doesn't involve backtracking through a tree.


Sure.


The problem with Go was evaluating leaf nodes. Sure, you could quickly enumerate every possible position 6 moves out, but accurately deciding whether position 1 is better than positions 2 through 2 billion is a really hard problem.

In that respect chess is a much simpler problem: you remove material from the board, prefer some squares over others, and so on. Go, by contrast, generally has the same number of stones on each candidate board, and it's all about balancing local and board-wide gains.
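
To see why chess leaf evaluation is comparatively tractable, here's a toy material counter (piece values are the textbook ones; a real engine adds positional terms, and nothing this simple exists for go):

  # Toy chess leaf evaluation: sum material from White's point of view.
  # Uppercase letters are White pieces, lowercase are Black (FEN convention).
  PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

  def material_score(pieces):
      """pieces: a list of piece letters still on the board."""
      score = 0
      for p in pieces:
          value = PIECE_VALUES[p.upper()]
          score += value if p.isupper() else -value
      return score

  # White is up the exchange (rook vs knight):
  print(material_score(["K", "Q", "R", "P", "k", "q", "n", "p"]))  # 2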


While I understand what you are getting at - this is still just a complete-information game, and it didn't solve AI - you are drastically understating the complexity of Go. It isn't actually possible to evaluate a significant fraction of the state tree in the early midgame, because the branching factor is roughly 300. The major advance of AlphaGo is a reasonable state scoring function using deep nets.

Unless you are an AI PhD student who has kept up with the current deep net literature (or have one handy), I assure you that the whole of AlphaGo will be unintuitive to you. And if you were such a student, you likely wouldn't be so dismissive of this achievement.


> The major advance of AlphaGo is a reasonable state scoring function using deep nets.

That and the policy network to prune the branching factor.
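
As a rough illustration (a toy sketch, not AlphaGo's actual code), the policy network turns "consider all ~300 legal moves" into "consider the handful the network rates highly":

  # Toy policy-based pruning: instead of expanding every legal move,
  # keep only the moves a (hypothetical) policy network rates most likely.
  def prune_moves(legal_moves, policy_probs, keep=5):
      """Return the `keep` moves with the highest policy probability."""
      ranked = sorted(legal_moves, key=lambda m: policy_probs[m], reverse=True)
      return ranked[:keep]

  # In a real system these probabilities would come from a trained network;
  # here they are made-up numbers for a handful of go coordinates.
  legal_moves = ["D4", "Q16", "C3", "K10", "R4", "F17"]
  policy_probs = {"D4": 0.31, "Q16": 0.27, "C3": 0.02,
                  "K10": 0.22, "R4": 0.11, "F17": 0.07}
  print(prune_moves(legal_moves, policy_probs, keep=3))  # ['D4', 'Q16', 'K10']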


> Here's a top tier artificial intelligence problem: given a plain English description of a computer program, implement it in source code.

I would consider it a breakthrough if we could get human beings to do this at a decent rate :)


Even harder and more common problem -- given code, give a plain English description of what it is intended to do, and describe any shortcomings of the implementation.


Yeah, e.g. you could get it to check whether the code could go into an infinite loop.

Oh wait .... https://en.wikipedia.org/wiki/Halting_problem


You could, for all practical purposes. The halting problem only really bites when you have to handle all possible programs; in practice you only need to consider the well-written ones, and you can filter out the poorly written ones.
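
In that "for all practical purposes" spirit, the usual workaround is a resource bound rather than a true decision procedure - a sketch, with the script path and time budget obviously made up:

  # Practical stand-in for a halting check: run the program with a time
  # budget and treat "still running" as "probably stuck in a loop".
  # This is a heuristic, not a refutation of the halting problem.
  import subprocess

  def probably_halts(script_path, timeout_seconds=5):
      try:
          subprocess.run(["python", script_path], capture_output=True,
                         timeout=timeout_seconds)
          return True        # finished within the budget
      except subprocess.TimeoutExpired:
          return False       # ran out of time; assume it doesn't halt

  print(probably_halts("some_script.py"))  # path is hypothetical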


Here's a top tier human intelligence problem: given a requirement, provide an accurate English description of a program.


Wait, what is the plan to brute-force go? The search space is beyond immense...



