The main point is that skeptics in the 2000s were basically right that neural nets were of limited use at the time, but other technologies advanced and broke down the barriers.
LeCun's original convolutional nets in 1998 were trained on the then-gigantic MNIST dataset of 60,000 images. Consider that a company like Facebook can provide billions of images with some form of tagging, and you see what a different world we live in.
There's a pretty fun/good course on one of the free MOOC platforms that pretty much follows the book and uses Pacman as a running example. It uses Python. It's the course with the cute robot on the slides :)
Edit (found it, this one): https://www.edx.org/course/uc-berkeleyx/uc-berkeleyx-cs188-1...
Edit2: There's also this one, but it's not the one I'm thinking of (it's also good though; Norvig is one of the authors of AIMA): https://www.udacity.com/course/cs271