Using clear, easy-to-understand language, Michael explains neural nets, the backprop algorithm, the challenges of training these models, some commonly used modern building blocks, and more:
This book opened my eyes to the power of textbooks written in such a clear, easy-to-understand style. I bet it took repeated revisions, feedback from others, and many hours of work, but writing like that is a huge value-add to the world.
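If you want a taste of the core machinery the book builds up to, here is a toy numpy sketch of backprop through a single sigmoid layer. It's my own minimal example, not Nielsen's code; the data, network shape, and learning rate are arbitrary, and it just uses the quadratic cost the book starts with.

    import numpy as np

    np.random.seed(0)

    # Toy data: 4 samples, 3 features, binary targets (arbitrary, for illustration only).
    X = np.random.randn(4, 3)
    y = np.array([[0.0], [1.0], [1.0], [0.0]])

    # One sigmoid layer: weights (3x1) and a bias.
    W = np.random.randn(3, 1)
    b = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for step in range(1000):
        # Forward pass.
        z = X @ W + b
        a = sigmoid(z)

        # Quadratic (mean squared error) cost.
        loss = np.mean((a - y) ** 2)

        # Backward pass: chain rule through the cost, the sigmoid, and the linear layer.
        dloss_da = 2 * (a - y) / len(X)
        da_dz = a * (1 - a)
        delta = dloss_da * da_dz      # dloss/dz
        dW = X.T @ delta              # dloss/dW
        db = delta.sum(axis=0, keepdims=True)

        # Gradient descent update.
        W -= lr * dW
        b -= lr * db

    print("final loss:", loss)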
Badmephisto == Andrej Karpathy?!
I would've never made the connection ... badmephisto also got me into speedcubing, my pb is ~14 sec, crazy ...
Not as educational, but funny to see him getting owned in WoW.
Of course there are differences: once you choose a kernel, the feature map is set in stone, whereas DNNs can search over the space of feature maps. It is likely that the function a DNN converged to was already in the Hilbert space associated with the kernel, but that does not mean one would converge to it with a finite amount of training data.
Once one goes to infinite dimensions, all separable Hilbert spaces are the same (up to isomorphism), but with finite data it matters which basis/kernel one chooses.
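To make that contrast concrete, here's a small sketch of my own using scikit-learn as a stand-in (the dataset and hyperparameters are arbitrary): with an RBF kernel the implicit feature map is pinned down the moment gamma is chosen and training only fits weights on top of it, while the MLP's hidden layer is itself a feature map whose parameters are searched over during training.

    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier

    X, y = make_moons(n_samples=400, noise=0.2, random_state=0)

    # Kernel method: kernel='rbf' with a fixed gamma fixes the (implicit,
    # infinite-dimensional) feature map before any data is seen.
    svm = SVC(kernel="rbf", gamma=1.0).fit(X, y)

    # Neural net: the hidden layer *is* the feature map, and its parameters
    # are adjusted during training rather than chosen in advance.
    mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

    print("SVM train accuracy:", svm.score(X, y))
    print("MLP train accuracy:", mlp.score(X, y))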
I guess it depends on whether you're willing to read a textbook's worth of Wikipedia articles. (There is no shortcut. Actual 5-year-olds also require a few years until they can understand advanced mathematics.)
Uses up-to-date Keras and Python, and doesn't go as much into the network connections themselves.
I run regular training sessions and teach seminars on neural networks, and I find that most online tutorials (such as this one) go too deep into constructing a network from scratch - they lose people.
The biggest issues today are actually data formatting and ingestion, and then hyperparameter tuning. You really only need to grasp the basics to get started in 2019.
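For what it's worth, "the basics" really is only a few lines of Keras. Here's a minimal sketch (mine, not from any particular tutorial; the layer sizes, dropout rate, and epoch count are arbitrary choices):

    import tensorflow as tf

    # Load and normalize MNIST -- in practice, getting your own data into this
    # shape is usually the hard part.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # A small fully connected net.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=5)
    model.evaluate(x_test, y_test)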