
Geoffrey Hinton: Introduction to Deep Learning, Deep Belief Nets (2012) [video] - mindcrime
https://www.youtube.com/watch?v=GJdWESd543Y
======
rudyl313
Unfortunately this talk is kind of dated already. Most people don't stack RBMs
or autoencoders to pretrain the weights anymore. If you use dropout with
rectified linear units, you don't have to pretrain, even for large
architectures.
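
For a concrete picture, here's a minimal numpy sketch of that recipe; the
layer sizes, weight scale, and dropout rate are made up for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        # Rectified linear unit: elementwise max(0, x).
        return np.maximum(0.0, x)

    def dropout(x, p=0.5, training=True):
        # Inverted dropout: zero each unit with probability p during
        # training, then rescale by 1/(1-p) so the expected activation
        # matches test time (when dropout is a no-op).
        if not training:
            return x
        mask = rng.random(x.shape) >= p
        return x * mask / (1.0 - p)

    # Toy forward pass through one hidden layer, no pretraining involved.
    x = rng.standard_normal((32, 100))          # batch of 32 inputs
    W = rng.standard_normal((100, 200)) * 0.01  # randomly initialized weights
    h = dropout(relu(x @ W), p=0.5)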

~~~
bradneuberg
It's not just ReLUs that have helped; it's also better random initialization
before training starts, such as Xavier initialization
(http://andyljones.tumblr.com/post/110998971763/an-explanation-of-xavier-initialization).
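
As an illustrative sketch, Xavier (Glorot) uniform initialization just scales
the weight range by the layer's fan-in and fan-out; the sizes below are
arbitrary:

    import numpy as np

    def xavier_init(n_in, n_out, rng=np.random.default_rng(0)):
        # Glorot/Xavier uniform: pick the range so activation variance
        # stays roughly constant from layer to layer.
        limit = np.sqrt(6.0 / (n_in + n_out))
        return rng.uniform(-limit, limit, size=(n_in, n_out))

    W = xavier_init(100, 200)  # weights for a 100 -> 200 layer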

Batch normalization also helps with convergence. And for recurrent neural
nets, LSTMs address the vanishing gradients that make plain RNNs hard to
train.
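
For batch normalization, the core computation is just normalizing each
feature over the batch and re-applying a learned scale and shift; a minimal
numpy sketch of the training-time forward pass (batch and feature sizes are
arbitrary):

    import numpy as np

    def batch_norm(x, gamma, beta, eps=1e-5):
        # Normalize each feature across the batch, then apply the learned
        # scale (gamma) and shift (beta).
        mu = x.mean(axis=0)
        var = x.var(axis=0)
        x_hat = (x - mu) / np.sqrt(var + eps)
        return gamma * x_hat + beta

    rng = np.random.default_rng(0)
    x = rng.standard_normal((32, 200))  # pre-activations for a batch of 32
    gamma, beta = np.ones(200), np.zeros(200)
    y = batch_norm(x, gamma, beta)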

------
bradneuberg
I completed this course over the last year. It's fantastic and from one of the
founders of the field.

I'd stick to the first half to get a good sense of backpropagation and
working with standard neural nets. I'd hold off on the second half, which
delves more into Restricted Boltzmann Machines (RBMs) and autoencoders; these
aren't used as much anymore.

To catch up on what has happened since 2012, I'd learn about ReLUs rather
than sigmoids as activation functions, and study up on convolutional neural
networks (CNNs) and the recent work on sequence-to-sequence NLP translation
with neural networks.
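
One way to see why ReLUs won out over sigmoids: sigmoid gradients vanish as
inputs saturate, while the ReLU gradient stays at 1 for any positive input. A
quick numpy check (the sample inputs are arbitrary):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    x = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
    s = sigmoid(x)
    # Sigmoid gradient s*(1-s) shrinks toward 0 for large |x|:
    print(s * (1 - s))            # ~[5e-05, 0.197, 0.25, 0.197, 5e-05]
    print((x > 0).astype(float))  # ReLU gradient: [0, 0, 0, 1, 1]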

------
hoaphumanoid
His Coursera lectures are awesome.

------
king_magic
This looks fantastic. Exactly the kind of intro to deep learning I've been
looking for.

