It's ironic how these DNN researchers bash prior work as over-hyped while hyping their own research through the roof.
AI researchers did some amazing stuff in the 60s and 80s, considering the hardware limitations they had to work under.
>"At the core, it's just one simple idea of a neural network."
Not really. The first neural networks were built in the 50s and didn't produce any particularly interesting results. Most of the results in the video are a product of fiddling with network architectures, plus throwing more and more hardware at the problem.
Also, none of the architectures/algorithms used by deep learning today are more general than, say, pure MCTS. You adapt the problem to the architecture, or the architecture to the problem, but the system itself does not adapt.
It's not as if there is a single "neural" architecture that's getting better and better. There are dozens of different architectures, each with its own optimizations, shortcuts, functions and parameters.