Unsupervised deep learning is the heart of what they are doing, and it is a relatively new method (the first time I saw it was at NIPS in Dec 2006). The 1980s were dominated by two-layer supervised feedforward neural networks, which are quite different.
Correct, and an excellent point. Also, using unsupervised learning to process very complex input and build simpler representations, then applying supervised learning to that simpler input, is useful.
I wrote a commercial Go-playing program in the late 1970s that was all procedural code. I have been thinking of hitting this problem again (on just a 9x9 board) using this combination: unsupervised learning to reduce the complexity of the input data, then a mix of supervised learning and some hand-written code.
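A minimal sketch of that two-stage pipeline in Python: an unsupervised step (here, plain PCA via SVD) compresses 81-dimensional 9x9 board vectors into a simpler representation, and a supervised step learns from the compressed features. The board encoding, the toy label, and the 10-component cut-off are all made up for illustration, not anything from a real Go engine.

```python
import numpy as np

# Hypothetical toy data: 200 random 9x9 Go positions encoded as
# 81-dim vectors (1 = black, -1 = white, 0 = empty), with a made-up
# binary label ("black has more stones"). Purely illustrative.
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 0.0, 1.0], size=(200, 81))
y = (X.sum(axis=1) > 0).astype(int)

# Unsupervised step: PCA via SVD reduces the 81-dim input to a
# simpler 10-dim representation, learned without using the labels.
X_centered = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
Z = X_centered @ Vt[:10].T  # project onto 10 principal components

# Supervised step: a trivial nearest-centroid classifier on the
# reduced representation stands in for the learned/hand-written layer.
centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
preds = np.argmin(
    ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2), axis=1
)
accuracy = (preds == y).mean()
print(f"training accuracy on reduced features: {accuracy:.2f}")
```

The point is only the shape of the pipeline: the expensive, label-free compression happens once, and the supervised learner then works in a much smaller space.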
Fast hardware for deep neural networks (> 1 hidden layer) was not available the last time I worked on Go-playing programs. (There are also new Monte Carlo techniques that are producing really good results.)
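The core of those Monte Carlo techniques is easy to sketch: estimate a position's value by averaging the results of many random playouts. Here it is on a trivial take-away game rather than Go (the game rules are illustrative; real Monte Carlo Go engines add tree search on top of this):

```python
import random

# Toy game: players alternately remove 1-3 stones from a pile;
# whoever takes the last stone wins. Rules chosen for brevity only.

def random_playout(stones, to_move):
    """Play uniformly random moves to the end; return the winner (0 or 1)."""
    while stones > 0:
        take = random.randint(1, min(3, stones))
        stones -= take
        if stones == 0:
            return to_move  # took the last stone: this player wins
        to_move = 1 - to_move
    return 1 - to_move

def monte_carlo_value(stones, to_move, n_playouts=10000):
    """Estimate the win probability for the player to move by sampling."""
    wins = sum(random_playout(stones, to_move) == to_move
               for _ in range(n_playouts))
    return wins / n_playouts

print(monte_carlo_value(10, 0))  # estimated value of a 10-stone pile
```

Nothing game-specific is needed beyond a legal-move generator and a terminal test, which is why the approach transferred so well to Go, where hand-written evaluation functions had always been weak.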
Keep in mind that they are a supervised technique, so they can't be compared to what Google is doing.
80s-style neural networks are flexible and powerful learners. Theoretically they can learn any function, and in practice they often come up with decent solutions. They aren't perfect: there is no guarantee they will find the best solution (they can get stuck in local minima), and they operate as a black box, meaning we can't easily interpret what they have learned.
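For concreteness, here is an 80s-style network in miniature: one hidden layer of sigmoid units trained by plain backpropagation on XOR, the classic function a single layer cannot learn. The layer sizes, learning rate, and iteration count are arbitrary illustrative choices:

```python
import numpy as np

# XOR training set: the textbook example requiring a hidden layer.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two weight layers: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # ideally approaches [0, 1, 1, 0]
```

It also illustrates the caveats above: a different random seed or learning rate can leave the same network stuck in a local minimum, and the trained weight matrices give no human-readable account of what was learned.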
I've heard the saying that "80s-style neural networks are usually the second best solution", which is oversimplified but close to correct.