Hacker News

>"Prior to 2012, AI was a field of broken promises."

I love how these DNN researchers bash prior work as over-hyped while hyping their own research through the roof.

AI researchers did some amazing stuff in the 60s and 80s, considering the hardware limitations they had to work under.

>"At the core, it's just one simple idea of a neural network."

Not really. The first neural networks were built in the 50s (Rosenblatt's perceptron) and didn't produce any particularly interesting results. Most of the results in the video are a product of fiddling with network architectures, plus throwing more and more hardware at the problem.

Also, none of the architectures/algorithms used by deep learning today are more general than, say, pure MCTS. You adapt the problem to the architecture, or the architecture to the problem, but the actual system does not adapt itself.

So, they didn't have backprop and automatic differentiation in the 50s. That's pretty fundamental, and not just "fiddling with architectures".

That statement is fairly inaccurate. If you check Peter Madderom's 1966 thesis, you'll see it states that the earliest work on automatic differentiation was done in the 1950s; it was just called analytic differentiation back then. Many of the key ideas already existed, including research into specializations for efficiently applying the chain rule.
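For context, the core of automatic differentiation is just mechanically applying the chain rule as arithmetic is performed. Here's a minimal sketch of forward-mode AD using dual numbers — all names are hypothetical, and only `+` and `*` are handled:

```python
# Minimal forward-mode automatic differentiation with dual numbers.
# Each value carries (value, derivative); arithmetic applies the chain rule.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # sum rule: (u + v)' = u' + v'
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__


def derivative(f, x):
    # Seed with dx/dx = 1 and read the derivative off the result.
    return f(Dual(x, 1.0)).dot


# d/dx (x*x + 3x) = 2x + 3, so at x = 2 the derivative is 7
print(derivative(lambda x: x * x + 3 * x, 2.0))  # 7.0
```

This is the forward-mode flavor; backprop corresponds to the reverse-mode flavor, which propagates derivatives from outputs back to inputs instead.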

Ah, you're right on AD. But backprop was invented in the 80s.
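Whatever the date, the mechanism itself is small: backprop is the chain rule run in reverse from the loss back to the weights. A minimal single-neuron sketch with gradient descent — all names hypothetical:

```python
import math

def step(w, b, x, y, lr=0.1):
    """One forward/backward pass for a single sigmoid neuron."""
    # forward pass
    z = w * x + b
    p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
    loss = 0.5 * (p - y) ** 2        # squared error
    # backward pass: chain rule from loss back to parameters
    dp = p - y                       # dL/dp
    dz = dp * p * (1.0 - p)          # dL/dz via sigmoid derivative
    # dL/dw = dz * x, dL/db = dz; descend the gradient
    return w - lr * dz * x, b - lr * dz, loss

w, b = 0.0, 0.0
for _ in range(2000):
    w, b, loss = step(w, b, x=1.0, y=1.0)
print(loss)  # loss shrinks toward 0 as the neuron fits the target
```

A real network just repeats this layer by layer, which is exactly reverse-mode AD applied to the loss function.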

But being fundamental in this context cuts against the "one simple idea" claim, not for it.

It's not as if there is a single "neural" architecture that keeps getting better and better. There are dozens of different architectures, each with its own optimizations, shortcuts, functions, and parameters.
