
I think achieving superiority over humans is practically solving the problem, though. Solving chess or Go by exhausting the complete search space seems more like a hardware/computational goal than a practical ML/AI goal.



It all hinges on your definition of "solved".

"Solved" in the AI/game theory has a very strict definition. It indicates that you have formally proven that one of the players can guarantee an outcome from the very beginning of the game.

The less-strict definition being thrown around here in the comments is more like "This AI can always beat this human because it is much stronger."
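
To make the strict sense concrete, here's a minimal sketch (my own illustration, not anything from the thread): a negamax search that exhaustively solves tic-tac-toe, proving neither player can force a win from the opening position. "Solving" Go would mean establishing the analogous value for the 19x19 board, which is wildly beyond current hardware.

    # Exhaustively solving tic-tac-toe by full game-tree search (negamax).
    # "Solved" here means the value of the initial position is proven,
    # not merely that we built a strong player.
    def winner(b):
        lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
        for i, j, k in lines:
            if b[i] and b[i] == b[j] == b[k]:
                return b[i]
        return None

    def negamax(b, player):
        # Value of the position for the side to move: +1 win, 0 draw, -1 loss.
        w = winner(b)
        if w:
            return 1 if w == player else -1
        if all(b):
            return 0  # board full: draw
        best = -2
        for i in range(9):
            if not b[i]:
                b[i] = player
                best = max(best, -negamax(b, 'O' if player == 'X' else 'X'))
                b[i] = None
        return best

    print(negamax([None] * 9, 'X'))  # prints 0: tic-tac-toe is a proven draw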


I think most people discussing this mean the latter, less pedantic option. I mean, that's the spirit of AI: can we make it think like a human, or even better? We are the yardstick.


That is a silly misuse of the term, and pointing that out is not pedantry. A problem isn't solved just because you beat the existing solution (i.e., human players). As long as there is the potential for a better solution that can beat yours, there is work to be done.


You don't have to go through the complete search space if it turns out optimal strategies are sparse. What do I mean by that? Take a second-price auction: the dominant strategy is to always bid your true value, even though the space of possible bids is any nonnegative real number. What does this mean for computational games like chess or Go? It may mean that while the search space is exponential, there exist computationally trivial strategies that work. I would compare this to Kolmogorov complexity, except that instead of a program, the output is a strategy.
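
A quick simulation sketch of that auction point (the value distribution and rival behavior below are arbitrary assumptions, purely for illustration): against random rivals, bidding your true value in a second-price auction does at least as well on average as shading or overbidding, so the bid space never needs to be searched.

    import random

    # Second-price (Vickrey) auction: highest bidder wins and pays the
    # second-highest bid. Compare truthful bidding against shading and
    # overbidding; the distributions here are made up for illustration.
    def payoff(my_bid, my_value, rival_bids):
        top_rival = max(rival_bids)
        if my_bid > top_rival:
            return my_value - top_rival  # win, pay the second price
        return 0.0                       # lose, pay nothing

    random.seed(0)
    my_value = 10.0
    strategies = {"truthful": 1.0, "shade 20%": 0.8, "overbid 20%": 1.2}
    totals = {name: 0.0 for name in strategies}
    trials = 100_000

    for _ in range(trials):
        rivals = [random.uniform(0, 15) for _ in range(3)]
        for name, mult in strategies.items():
            totals[name] += payoff(my_value * mult, my_value, rivals)

    for name, total in totals.items():
        print(f"{name}: average payoff {total / trials:.3f}")
    # Truthful bidding matches or beats both alternatives on average.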


Any substandard statistical model fitted by a simple computer program is superior to what an unaided human could achieve with pen and paper, but few of them can claim to practically "solve" the problem merely because they beat the crude fit heuristics proposed by humans, who are not good calculating machines.
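
As a toy illustration of that (my own synthetic numbers, not from the comment): an ordinary least-squares fit dashed off in a few lines beats a plausible eyeballed heuristic, yet nobody would say regression is thereby "solved".

    import numpy as np

    # Crude "pen and paper" heuristic vs. ordinary least squares on
    # synthetic data (assumed here purely for illustration).
    rng = np.random.default_rng(42)
    x = rng.uniform(0, 10, 200)
    y = 3.0 * x + 5.0 + rng.normal(0.0, 2.0, 200)

    # A line a human might eyeball: slope from the means of the lower
    # and upper halves of the x-range, intercept through the centroid.
    lo, hi = x < 5, x >= 5
    slope = (y[hi].mean() - y[lo].mean()) / (x[hi].mean() - x[lo].mean())
    intercept = y.mean() - slope * x.mean()

    # Least squares via a degree-1 polynomial fit.
    ls_slope, ls_intercept = np.polyfit(x, y, 1)

    def mse(m, b):
        return float(np.mean((y - (m * x + b)) ** 2))

    print("heuristic MSE:     ", round(mse(slope, intercept), 3))
    print("least-squares MSE: ", round(mse(ls_slope, ls_intercept), 3))
    # The machine fit wins, but that hardly "solves" regression.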

An algorithm can't claim to have "solved" Go when future versions of the algorithm are expected to achieve vastly superior results, never mind any formal mathematical proof of optimality. What it has demonstrated is that humans aren't very good at Go. Given that Go involves estimating Nash equilibrium responses in a perfect-information game with a finite, knowable, but extremely large space of possible outcomes, it's perhaps not surprising that Go is the sort of problem humans are not very good at and that computers can incrementally improve on our attempted solutions. Perhaps the more interesting finding from AlphaGo Zero is that humans were so bad at Go that not training on human games and theory actually improved performance.


We've just created a tool people can use to play Go better than a person without the tool. Until something emerges from the rain forest, or comes down from space, that can throw down and win, I'd say humans are still the best Go players in the known universe.


That's like when the whole class fails a test, but the prof. grades on a curve. Someone gets an A, but not really. edit: some grammar.



