
Highly optimised single-function algorithms like this are impressive stuff and can lead to useful tools, but that's it. This gets us no closer to strong AI than a tic tac toe program. Until we have systems that can tackle a wide range of fundamentally different problems and independently adapt strategies for dealing with one class of problems to deal with other classes of problems, systems like Alphago will remain one trick wonders with little relevance to 'true' AI.

Edit: I do understand that the techniques used to implement Alphago can be used to implement other single-function solvers. That doesn't make it a general purpose strong AI.




Welcome to the AI effect! Every time AI accomplishes something, it is disregarded. The goalposts are perpetually moved. "AI is whatever computers can't do yet."

People said for years that Go would never be beaten in our lifetime. They said this because Go has a massive search space. It can't be beaten by brute force search. It requires intelligence, the ability to learn and recognize patterns.

And it requires doing that at the level of a human. A brute force algorithm can beat humans by doing a stupid thing far faster than a human can. But a pattern recognition based system has to beat us by playing the same way we do. If humans can learn to recognize a specific board pattern, it also has to be able to learn that pattern. If humans can learn a certain strategy, it also has to be able to learn that strategy. All on its own, through pattern recognition.

And this leads to a far more general algorithm. The same basic algorithm that can play Go can also do machine vision, compose music, translate languages, or drive cars. Unlike the brute force method that only works on one specific task, the general method is, well, general. We are building artificial brains that are already learning to do complex tasks faster and better than humans. If that's not progress towards AGI, I don't know what is.


The "moving goalposts" argument is one that really needs to die. It's a classic empty statement. Just because other people made <argument> in the past does not mean it's wrong. It proves nothing. People also predicted AGI many times over-optimistically; probably just as often as people have moved goalposts.


I don't know what you are trying to say. I'm making an observation that whenever AI beats a milestone, there are a bunch of pessimists that come out and say "but obviously X was beatable by stupid algorithms. I will believe AI is making progress when it beats Y!"

Those arguments absolutely are wrong. For one thing it's classic hindsight bias. When you make a wrong prediction, you should update your model, not come up with justifications why your model doesn't need to change.

But second, it's another bias, where nothing looks like AI, or AI progress. People assume that intelligence should be complicated, that simple algorithms can't have intelligent behavior. That human intelligence has some kind of mystical attribute that can't be replicated in a computer.


I said exactly what I said. Calling out "moving the goalposts" does not refute the assertion that this does not get us nontrivially closer to AGI.

Whenever AI beats a milestone, there are a bunch of over-optimists that come out and make predictions about AGI. They have been wrong over and over again over the course of half a century. It's classic hindsight bias.


Yes it does! If you keep changing what you consider "AI" every time it makes progress, then it looks like we are never getting closer to AI. When in fact it is just classic moving goalposts.

And the optimists are being proven right. AGI is almost here.


This doesn't address my argument at all.


As far as I know the goalpost of the Turing test has never moved.


That's because it hasn't been beaten yet! As soon as a chatbot beats a Turing test, a lot of AI deniers will come out and say that the Turing test doesn't measure 'real' intelligence, or isn't a valid test for <reasons>.

I know this is true, because there are already a lot of people that think the Turing test isn't valid. They believe it could be beaten by a stupid chatbot, or deception on the part of AI. Just search for past discussions on HN of the Turing test, it comes up a lot.

There is no universally accepted benchmark of AI. Let alone a benchmark for AI progress, which is what Go is.

No one claimed that Go would require a human level AI to beat. But I am claiming that beating it represents progress towards that goal. Whereas passing the Turing test won't happen until the very end. Beating games like Go is one of the little milestones along the journey.


Viewed that way, I'll accept that beating Go represents progress. That's not the same as saying it is evidence that singularity-style strong AI is almost upon us, as suggested in the post I was replying to. In the long term it might turn out to represent very minimal progress towards that goal.


Chatbots can already beat the Turing test.


Chatbots are trivially easy to beat. Just try teaching one a simple game and asking it to play with you. Basically, any question that requires it to form a mental model of something and then mutate or interrogate that model's state.

Many of the chatbot Turing test competitions have heavily rigged rules restricting the kinds of questions you're allowed to ask in order to give the bots a chance.


Only for bad judges. Just ask any AI - 'what flavour do you reckon a meteorite is' or something weird like that, and watch it try and equivocate.

(the answer is Rocky Road by the way)


In case you're not aware, AlphaGo's key component is based on the same type of Deepmind system that learned to play dozens of Atari games, to superhuman levels, by watching the pixels, without any programmatic adaptation to the particular Atari game. At least the version of AlphaGo that played in October was far less specialized for Go than Deep Blue was for chess. Demis Hassabis says that next up after this is getting Deepmind to play Go without any programmatic specialization for Go. Your reply would be appropriate if we were talking about Deep Blue, chess, and 1997.


That's incorrect. The features that AlphaGo uses are not pixel-level features but board states, and the architectures of AlphaGo and the Atari network are completely different.

It's still an incredible achievement - but it's important to be accurate.


For AlphaGo, a "pixel" is a point on the board. It uses essentially the same convolutional neural networks (CNNs) that are in state-of-the-art machine vision systems. But yes, the overall architecture is rather different from the Atari system, due to the integration of that CNN with Monte Carlo Tree Search.
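
For concreteness, a minimal sketch of what "a point on the board as a pixel" looks like (PyTorch; the feature planes and layer sizes here are made up for illustration, not AlphaGo's actual network):

    import torch
    import torch.nn as nn

    # The board is treated as a 19x19 "image" with a few feature planes
    # (e.g. own stones, opponent stones, empty points); a small CNN maps
    # it to a probability over the 361 points.
    class TinyPolicyNet(nn.Module):
        def __init__(self, planes=3, width=32):
            super().__init__()
            self.conv1 = nn.Conv2d(planes, width, kernel_size=3, padding=1)
            self.conv2 = nn.Conv2d(width, width, kernel_size=3, padding=1)
            self.head = nn.Conv2d(width, 1, kernel_size=1)   # one logit per point

        def forward(self, board):                  # board: (batch, planes, 19, 19)
            x = torch.relu(self.conv1(board))
            x = torch.relu(self.conv2(x))
            logits = self.head(x).flatten(1)       # (batch, 361)
            return torch.softmax(logits, dim=1)    # move probabilities

    net = TinyPolicyNet()
    probs = net(torch.zeros(1, 3, 19, 19))         # dummy empty board

The point being that the convolutional machinery is the same whether the input planes encode camera pixels or board positions.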


Sorry, you're off base a bit. The Atari system did use a Deep Neural Network / Reinforcement algorithm, but as the original poster was trying to point out, the rules of Go were very much hard coded into AlphaGo. From what this [1] says, multiple DNNs are learning how to traverse Monte Carlo trees of Go games. The reinforcement piece comes in choosing which of the Go players is playing the best games.

While the higher portions do share some similarities with the Atari system, at a basic level this is a machine that was designed and trained to play Go. AlphaGO is 'essentially the same' as the Atari system in the same way that all Neural Networks are 'essentially the same.'

Is this an extremely impressive accomplishment? Yes. However, it doesn't seem to qualify as anything close to generalizable.

[1] http://googleresearch.blogspot.com/2016/01/alphago-mastering...


I didn't say AlphaGo is essentially the same as Deep Q Networks. I said the convolutional neural network part of it is essentially the same. We agree that the integration of that CNN into the rest of the system is very different.


It's best to say that AlphaGo uses neural networks, which are extremely general. The same way planes and cars both use internal combustion engines. ICEs are extremely general. They produce mechanical energy from gas, and are totally uncaring whether you put them into a plane or a car. The body of the plane is necessary, but isn't really the interesting part.

Likewise NNs are uncaring what application you put them into. Give them a different input and a different goal, and they will learn to do that instead. AlphaGo gave its NNs control over a Monte Carlo search tree, and that turned out to be enough to beat Go. They could plug the same AI into a car and it would learn to control that instead.
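
A rough sketch of that division of labour (the names and the exploration constant are illustrative, not DeepMind's code): the network only supplies per-move priors and value estimates, while a generic search rule decides which branch to explore next.

    import math

    class Node:
        def __init__(self, prior):
            self.prior = prior           # P(s, a) suggested by the policy net
            self.visits = 0
            self.value_sum = 0.0
            self.children = {}           # move -> Node

        def value(self):
            return self.value_sum / self.visits if self.visits else 0.0

    def select_child(node, c_puct=1.5):
        # Balance each child's average value against how strongly the network
        # recommends it and how rarely it has been visited so far.
        total = sum(child.visits for child in node.children.values())
        def score(child):
            u = c_puct * child.prior * math.sqrt(total + 1) / (1 + child.visits)
            return child.value() + u
        return max(node.children.items(), key=lambda kv: score(kv[1]))

Swap in a different network and a different environment and the search scaffolding stays the same.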

Note that even without the Monte Carlo search system, it was able to beat most amateurs, and predict the moves experts would make most of the time.


Even without the neural net system, AI is able to beat most amateurs, and predict moves experts would make.


I'm not sure that's correct. MCTS has well known weaknesses, and isn't even a predictive algorithm. MCTS on its own couldn't get anywhere near beating the top Go champion; that requires DeepMind's neural networks.


http://www.milesbrundage.com/blog-posts/alphago-and-ai-progr...

The best Go program before AlphaGo was CrazyStone, ranked at 5-dan ("high amateur" range).


There's a massive skill difference between amateurs and professionals. It couldn't even beat the top amateurs.


Which is why I said the intuition was amateur-pro level. I did not say it could beat every amateur-pro in the world.


Actually, putting a piece of software in front that infers board states from a video feed would be an easy problem.


That's actually true - going from the pixel level to the board state is trivial and not particularly interesting.


It's trivial today. It would have been interesting perhaps twenty years ago?


No, because even handcrafted computer vision systems from 20 years ago would be able to parse a Go board (edge detection + check the color contrast).
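
Something like this rough sketch would do it (OpenCV; it assumes the photo is already cropped and aligned to the board, and the brightness thresholds are made up):

    import numpy as np
    import cv2

    def read_board(image_path, size=19):
        gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
        h, w = gray.shape
        board = np.zeros((size, size), dtype=int)   # 0 empty, 1 black, 2 white
        for i in range(size):
            for j in range(size):
                cell = gray[i*h//size:(i+1)*h//size, j*w//size:(j+1)*w//size]
                mean = cell.mean()
                if mean < 70:        # dark patch: black stone
                    board[i, j] = 1
                elif mean > 180:     # bright patch: white stone
                    board[i, j] = 2
        return board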


True. I guess you'd have to go even further back.


>> In case you're not aware, AlphaGo's key component is based on the same type of Deepmind system that learned to play dozens of Atari games, to superhuman levels, by watching the pixels, without any programmatic adaptation to the particular Atari game.

The Atari-playing AI watched the pixels indeed, but it was also given a set of actions to choose from and, more importantly, a reward representing the change in the game score.

That means it wasn't able to learn the significance of the score on its own, or to generalise from the significance of the changing score in one game, to another.

It also played Atari games, which _have_ scores, so it would have been completely useless in situations where there is no score or clear win/loss condition.
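
For what it's worth, the setup being described boils down to something like this sketch (env and choose_action are hypothetical stand-ins, not a real library API): the agent sees raw frames, picks from a fixed action set, and its reward is exactly the change in score.

    def play_episode(env, choose_action):
        frame = env.reset()
        prev_score, total_reward = 0, 0.0
        while not env.done():
            action = choose_action(frame)     # e.g. greedy over learned Q-values
            frame, score = env.step(action)
            reward = score - prev_score       # the only signal the agent is given
            prev_score = score
            total_reward += reward
        return total_reward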

AlphaGo is also similarly specialised to play Go. As is machine learning in general: someone has to tell the algorithm what it needs to learn, either through data engineering, reward functions, etc. A general AI would learn what is important on its own, like humans do, so machine learning has not yet shown that it can develop into AGI.


I think you are confusing utility functions with intelligence. All AIs need utility functions. An AI without a utility function would just do nothing. It would have no reason to beat Atari games, because it wouldn't get any reward for doing so.

Even humans have utility functions. For example, we get rewards for having sex, or eating food, or just forming social relationships with other humans. Or we get negative reinforcement from pain, getting hurt, or being rejected socially.

You can come up with more complicated utility functions. Like instead of beating the game, its goal could be to explore as much of the game as possible. To discover novel things in the game. Kind of like a sense of boredom or novelty that humans have. But in the end it's still just a utility function; it doesn't change how the algorithm itself works to achieve it. AGI is entirely agnostic to the utility function.
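
A sketch of what that agnosticism means in code (agent, env, state and the reward functions are illustrative placeholders): the training loop takes the utility function as a parameter, and swapping "beat the game" for "seek novelty" changes nothing about the learner itself.

    def score_reward(old_state, new_state):
        return new_state.score - old_state.score       # "beat the game"

    def make_novelty_reward():
        seen = set()
        def reward(old_state, new_state):
            key = new_state.key()                      # "seek out new things"
            if key in seen:
                return 0.0
            seen.add(key)
            return 1.0
        return reward

    def train(agent, env, reward_fn, steps=10000):
        state = env.reset()
        for _ in range(steps):
            action = agent.act(state)
            next_state = env.step(action)
            agent.learn(state, action, reward_fn(state, next_state), next_state)
            state = next_state

Nothing in train() changes when you pass make_novelty_reward() instead of score_reward; only the behaviour that ends up being rewarded does.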


>> I think you are confusing utility functions with intelligence.

No, what I'm really saying is that you can't have an autonomous agent that needs to be told what to do all the time. In machine learning, we train algorithms by giving them examples of what we want them to learn, so basically we tell them what to learn. And if we want them to learn something new, we have to train them again, on new data.

Well, that's not conducive to autonomous or "general" intelligence. There may be any number of tasks that your "general" AI will need to perform competently at. What's it gonna do? Come back to you and cry every time it fails at something? So then you have a perpetual child AI that will never stand on its own two feet as an adult, because there's always something new for it to learn. Happy little AI, for sure, but not very useful and not very "general". Except for a general nuisance, maybe.

Edit: I'm saying that machine learning can't possibly lead to general AI, because it's crap at learning useful things on its own.


Machine learning doesn't "need to be told what to do all the time". No one told AlphaGo what strategies were the best. It figured that out on its own, by playing against itself.
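
Roughly, "playing against itself" amounts to something like this (the helpers passed in are hypothetical placeholders, not DeepMind's code): generate games with the current policy, then train on the moves of whichever side won.

    def self_play_round(policy, play_game, update, games=100):
        examples = []
        for _ in range(games):
            moves, winner = play_game(policy, policy)    # the policy plays itself
            for state, move, player in moves:
                if player == winner:
                    examples.append((state, move))       # imitate the winning side
        update(policy, examples)                         # one training step
        return policy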

There is also unsupervised and semi-supervised learning, which can take advantage of unlabelled data. Even supervised learning can work really well on weakly labelled data. E.g. taking pictures from the internet and using the words that occur next to them as labels. As opposed to hiring a person to manually label all of them.

I don't know what situation you are imagining that would make the AI "come back and cry". You will need to give an example.


>> Machine learning doesn't "need to be told what to do all the time". No one told AlphaGo what strategies were the best.

Of course they did. They trained it with examples of Go games and they also programmed it with a reward function that led it to select the winning games. Otherwise, it wouldn't have learned anything useful.

>> There is also unsupervised and semi-supervised learning, which can take advantage of unlabelled data.

Sure, but unsupervised learning is useless for learning specific behaviours. You use it for feature discovery and data exploration. As to semi-supervised learning, it's "semi" supervised: it learns its own features, then you train it with labels so that it learns a mapping from those features it discovered to the classes you want it to output.

>> I don't know what situation you are imagining that would make the AI "come back and cry"

That was an instance of humour [1].

[1] https://en.wikipedia.org/wiki/Humour


>Of course they did. They trained it with examples of Go games and they also programmed it with a reward function that led it to select the winning games. Otherwise, it wouldn't have learned anything useful.

Yes, but it doesn't need to be trained with examples of Go games. It helps a lot, but it isn't 100% necessary. It can learn to play entirely through self play. The Atari games were learned entirely through self play.

As for having a reward function for winning games, of course that is necessary. Without a reward function, any AI would cease to function. That's true even of humans. All agents need reward functions. See my original comment.

>That was an instance of humour

Yes I know what humour is lel. I asked you for a specific example where you think this would matter. Where your kind of AI would do better than a reinforcement learning AI.


>> The atari games were entirely self play.

That's reinforcement learning and it's even more "telling the computer what to do" than teaching it with examples.

Because you're actually telling it what to do to get a reward.

>> Without a reward function, any AI would cease to function.

I can't understand this comment, which you made before. Not all AI has a reward function. Specific algorithms do. "All" AI? Do you mean all game-playing AI? Even that's stretching it, I don't remember minimax being described in terms of rewards say, and I certainly haven't heard any of about a dozen classifiers I've studied and a bunch of other systems of all sorts (not just machine learning) being described in terms of rewards either.

Unless you mean "reward function" as the flip side of a cost function? I suppose you could argue that- but could you please clarify?

>> your kind of AI

Here, there's clearly some misunderstanding because even if I have a "my kind" of AI, I didn't say anything like that.

I'm sorry if I didn't make that clear. I'm not trying to push some specific kind of AI, though of course I have my preferences. I'm saying that machine learning can't lead to AGI, because of reasons I detailed above.


>That's reinforcement learning and it's even more "telling the computer what to do" than teaching it with examples.

No one tells the computer what to do. They just let it do its thing, and give it a reward when it succeeds.

>Not all AI has a reward function. Specific algorithms do. "All" AI?

Fine, all general AI. Like game playing etc. Minimax isn't general, and it does require a precise "value function" to tell it how valuable each state is. Classification also isn't general, but it also requires a precise loss function.


>> No one tells the computer what to do.

Sure they do. Say you have a machine learning algorithm that can learn a task from examples, and let's notate it like so:

y = f(x)

Where y is the trained system, f the learning function and x the training examples.

The "x", the training examples, is what tells the computer what to learn, therefore, what to do once it's trained. If you change the x, the learner can do a different y. Therefore, you're telling the computer what to do.

In fact, once you train a computer for a different y, it may or may not be really good at it, but it certainly can't do the old y anymore. Which is what I mean by "machine learning can't lead to AGI". Because machine learning algorithms are really bad at generalising from one domain to another, and the ability to do so is necessary for general intelligence.

Edit: note that the above has nothing to do with supervised vs unsupervised etc. The point is that you train the algorithm on examples, and that necessarily removes any possibility of autonomy.
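
As a toy, concrete version of the y = f(x) point (scikit-learn, with made-up data): the same learner, fit on different examples x, ends up doing a different y, and refitting discards the old behaviour.

    from sklearn.linear_model import LogisticRegression

    f = LogisticRegression()                 # the fixed learning function

    # Task A: label points by the sign of the first coordinate.
    xa, ya = [[1, 0], [2, 1], [-1, 0], [-2, 1]], [1, 1, 0, 0]
    f.fit(xa, ya)

    # Task B: label points by the sign of the second coordinate.
    xb, yb = [[0, 3], [1, 2], [0, -3], [1, -2]], [1, 1, 0, 0]
    f.fit(xb, yb)                            # refitting overwrites the task-A behaviour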

>> Fine, all general AI. Like game playing etc.

I'm still not clear what you're saying; game-playing AI is not an instance of general AI. Do you mean "general game-playing AI"? That too doesn't always necessarily have a reward function. If I remember correctly for instance, Deep Blue did not use reinforcement learning and Watson certainly does not (I got access to the Watson papers, so I could double-check if you doubt this).

Btw, every game-playing AI requires a precise evaluation function. The difference with machine-learned game-playing AI is that this evaluation function is sometimes learned by the learner, rather than hard-coded by the programmer.


The thing about neural networks is they can generalize from one domain to another. We don't have a million different algorithms, one for recognizing cars, and another for recognizing dogs, etc. They learn features that both have in common.

>The "x", the training examples, is what tells the computer what to learn, therefore, what to do once it's trained. If you change the x, the learner can do a different y. Therefore, you're telling the computer what to do.

But with RL, a computer can discover its own training examples from experience. They don't need to be given to it.

>I'm still not clear what you're saying; game-playing AI is not an instance of general AI.

But it is! The distinction between the real world and a game is arbitrary. If an algorithm can learn to play a random video game, you can just as easily plug it into a robot and let it play "real life". The world is more complicated, of course, but not qualitatively different.


1) this isn't a single function algorithm 2) the human mind is FULL of ugly, highly optimized hacks that accomplish one thing well enough for us to survive. Don't assume that human intelligence is this magical general intelligence, rather than a collection of single function algorithms.


It is. Value and policy networks are nonlinear approximators for value and policy functions.

You're making the mistake of assuming you know how the human brain learns.
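
For what it's worth, "nonlinear approximators for value and policy functions" can be made concrete with something like this sketch (PyTorch; the sizes are illustrative): one shared trunk, one head approximating the policy pi(a|s), one head approximating the value V(s).

    import torch
    import torch.nn as nn

    class PolicyValueNet(nn.Module):
        def __init__(self, state_dim=361, actions=362, hidden=128):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
            self.policy_head = nn.Linear(hidden, actions)   # logits over moves
            self.value_head = nn.Linear(hidden, 1)          # scalar win estimate

        def forward(self, state):
            h = self.trunk(state)
            policy = torch.softmax(self.policy_head(h), dim=-1)
            value = torch.tanh(self.value_head(h))
            return policy, value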


'True AI will always be defined as anything a computer can not yet do'


Sort-of repeating a comment I made last time AlphaGo came up:

As far as I know there is nothing particularly novel about AlphaGo, in the sense that if we stuck an AI researcher from ten years ago in a time machine to today, the researcher would not be astonished by the brilliant new techniques and ideas behind AlphaGo; rather, the time-traveling researcher would probably categorize AlphaGo as the result of ten years' incremental refinement of already-known techniques, and of ten years' worth of hardware development coupled with a company able to devote the resources to building it.

So if what we had ten years ago wasn't generally considered "true AI", what about AlphaGo causes it to deserve that title, given that it really seems to be just "the same as we already had, refined a bit and running on better hardware"?


It's easy to say that, in the same way that people now wouldn't be surprised if we could factor large numbers in polynomial time, given a functional quantum computer!

10 years ago no one believed it was possible to train deep nets[1].

It wasn't until the current "revolution" that people learned how important parameter initialization was. Sure, it's not a new algorithm, but it made the problem tractable.

So far as algorithmic innovations go, there's always ReLU (2011) and leaky ReLU (2014). The one-weird-trick paper was pretty important too.
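
(For reference, the two activations mentioned, in a couple of lines of numpy:)

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)             # ReLU: max(0, x)

    def leaky_relu(x, alpha=0.01):
        return np.where(x > 0, x, alpha * x)  # small slope instead of a hard zero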

[1] Training deep multi-layered neural networks is known to be hard. The standard learning strategy—consisting of randomly initializing the weights of the network and applying gradient descent using backpropagation—is known empirically to find poor solutions for networks with 3 or more hidden layers. As this is a negative result, it has not been much reported in the machine learning literature. For that reason, artificial neural networks have been limited to one or two hidden layers.

http://deeplearning.cs.cmu.edu/pdfs/1111/jmlr10_larochelle.p...


Dropout (and maxout) might also count.


It's only difficult because no one threw money at it. It's like saying going to Mars is difficult. It is - but most of the technology is there already; it just needs money to improve what was used to go to the moon.

If you had asked people 10 years before the moon landing whether it was possible, I too would have agreed it was impossible. But that breakthrough opened up the realm of possibilities.

I see AlphaGo more of an incremental improvement than a breakthrough.


It's a basic human bias to believe that anything that you don't have to do (or know how to do) "just needs money" to get done.


So are you arguing that superhuman-level performance is just a matter of engineering effort? Or am I missing something?

I'm generally considered to be way over optimistic in my assessment of AI progress. But wow.. that's pretty optimistic!


I interpreted him as saying superhuman-performance at Go was just a matter of engineering effort, which I wholeheartedly agree with.


Agreed. People don't realize that all of the huge algorithmic innovations (LSTMs, Convolutional neural networks, backpropagation) were invented in past neural net booms. I can't think of any novel algorithms of the same impact and ubiquity (e.g. universally considered to be huge algorithmic leaps) that have been invented in this current boom. The current boom started due to GPUs.


Something being invented previously doesn't mean that it existed as a matter of engineering practicality; improved performance is some but not all of that. Just describing something in a paper isn't enough to make it have impact, many things described in papers simply don't work as described.

A decade ago I was trying and failing to build multi-layer networks with back-propagation-- it doesn't work so well. More modern, refined, training techniques seem to work much better... and today tools for them are ubiquitous and are known to work (especially with extra CPU thrown at them :) ).


Backpropagation and convolutional neural nets were breakthroughs that were immediately put to use.


The point is that no one could train deep nets 10 years ago. Not just because of computing power, but because of bad initializations, and bad transfer functions, and bad regularization techniques, etc.

These things might seem like "small iterative refinements", but they add up to 100x improvement. Even when you don't consider hardware. And you should consider hardware too, it's also a factor in the advancement of AI.

Also, reading through old research, there are a lot of silly ideas along with the good ones. It's only in retrospect that we know this specific set of techniques works, and the rest are garbage. At the time it was far from certain what the future of NNs would look like. To say it was predictable is hindsight bias.


They could. There was a different set of tricks that didn't work as well (greedy pretraining).


Lots of people tried and failed.

Today lots of people-- ones with even less background and putting in less effort-- try and are successful.

This is not a small change, even if it is the product of small changes.


Reconnecting to my original point way up-thread: these "innovations" have not substantially expanded the types of models we are capable of expressing (they have certainly expanded the size of the models we're able to train), not nearly to the same degree as backprop/convnets/LSTMs did decades ago. That matters because AGI will require several more expansions in the types of models we are capable of implementing.


Right, LSTM was invented 20 years ago. 20 years from now, the great new thing will be something that has been published today. It takes time for new innovations to gain popularity and find their uses. That doesn't mean innovations are not being made!


Dropout and deep belief networks are significant recent algorithmic advances that are already widely used.


> As far as I know there is nothing particularly novel about AlphaGo,

By that standard there's nothing particularly novel about anything. Everything we have today is just a slight improvement of what we already had yesterday.

World experts in go and ML as recently as last year thought it would be many more years before this day happened. Who are you to trivialize this historic moment?


Some experts in Go less than 10 years ago believed it would be accomplished within 10 years. Also, you didn't actually refute his argument. Can you point to an algorithm that is not an incremental improvement over algorithms that existed 10 years ago? MCTS and reinforcement learning with function approximators definitely existed 20 years ago.


No, that's what they're saying. Take any invention and you can break it down into just a slight improvement of the sub-inventions it consists of.

A light bulb is just a metal wire encased in a non-flammable gas and you run electricity through it. It was long known that things get hot when you run electricity through them, and that hot things burst into fire, and that you can prevent fire by removing oxygen, and that glass is transparent. It's not a big deal to combine these components. A lot of people still celebrate it as a great invention, and in my opinion it is! Think about how inconvenient gas lighting is and how much better electrical light is.

Same thing with AlphaGo. Sure, if you break it down to its subcomponents it's just clever application of previously known techniques, like any other invention. But it's the result that makes it cool, not how they arrived at it!

All algorithms are incremental improvements of existing techniques. This isn't a card you can use to diminish all progress as "just a minor improvement what's the fuss".


No, not all inventions are incremental improvements of existing techniques. Backpropagation and convolutional nets, for example. Now, you might counter with the fact that it's just the chain rule (and convolution existed before that), but the point is that the algorithm had never been used in machine learning before.
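
(To spell out the "just the chain rule" point: for a feedforward net with activations a^(k) = f_k(W^(k) a^(k-1)) and loss L, what backpropagation computes, layer by layer, is

    \frac{\partial L}{\partial W^{(l)}}
      = \frac{\partial L}{\partial a^{(L)}}
        \frac{\partial a^{(L)}}{\partial a^{(L-1)}}
        \cdots
        \frac{\partial a^{(l+1)}}{\partial a^{(l)}}
        \frac{\partial a^{(l)}}{\partial W^{(l)}}

which is indeed the chain rule applied repeatedly; the contribution was in using it to train multi-layer networks efficiently.)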

People have used neural nets as function approximators for reinforcement learning with MCTS for game playing well before AlphaGo (!!).

Your lightbulb example actually supports my point. The lightbulb was the product of more than a half-century of work by hundreds of engineers/scientists. I have no problem with pointing to 70 years of work as a breakthrough invention.


I think the thing that would surprise a researcher from ten years ago is mainly the use of graphics cards for general compute. The shader units of 2005 would only be starting to get to a degree of flexibility and power where you could think to use them for gpgpu tasks.


I got my first consumer GPU in 1997 and was thinking about how to do nongraphical tasks on it almost immediately. I didn't come up with anything practically useful back then, and they were much more limited, but I don't think someone from 2006 would find it surprising to hear that this was a thing.


I don't know... CUDA was released almost 9 years ago. So I don't think it's a stretch to suggest that cutting edge researchers from 10 years ago would have been thinking about using GPUs that way.


Human mind... pshaw. You say this "human" thing is special? I don't see it: In my day we also had protons and electrons... all you're showing me is another mishmash of hadronic matter and leptons. So you've refined the arrangement here and there, okay, but I don't see any fundamental breakthrough, only incremental refinement.


The effectiveness? Ten years ago researchers thought humanlike intelligence must necessarily involve something more than simple techniques. Today that position becomes a lot harder to defend.


You are not a general purpose strong AI either. Your mind could easily consist of a whole bunch of single-function solvers, combined into networked components.

See this interview between Kurzweil and Minsky: https://www.youtube.com/watch?v=RZ3ahBm3dCk#action=share


Regarding your edit, I'd wager that it won't stay true for long. Eventually the single-function problem of orchestrating and directing a bunch of sub-solvers in a similar manner to the human brain will become feasible. At that point true general purpose AI will exist, for all intents and purposes.


And when that happens I guarantee someone will be on HN saying it is "trivial" and "not a real advancement" and "we already basically knew how to do this". Because that's what people say every time something interesting happens in AI, without fail.


I don't think anyone's claiming that AlphaGo is an AGI in and of itself, just that it's a significant step towards one. There's still a lot to go before we can toss a standardized piece of hardware+code into an arbitrary situation and have it 'just figure it out'.


> This gets us no closer to strong AI than a tic tac toe program.

We don't know that, actually. Maybe AGI isn't one shining simple algo, but cobbling together a bunch of algorithms like this one.



