I've found the math primer (two sections: "Linear Algebra" and "Probability and Information Theory") in this free book to be excellent so far: http://www.deeplearningbook.org/ It's a little under 50 pages for both sections.
I've seen the basics of linear algebra covered in many different places, and I think this is the most insightful yet concise intro I've come across. I haven't started the probability section yet, so I can't comment on it.
I also search for problems on the topic to help solidify my knowledge. You can almost always find a class that has posted problem sets with answers for a given topic.
Have people found any of them to be particularly outstanding? I'd be interested - and I suspect many HN readers would be - to hear recommendations.
Within a few hours of starting the course you'll have submitted an entry to the Kaggle Dogs vs. Cats competition that scores in the top 50% of entries and achieves 97% accuracy. It's designed for coders who don't have a PhD in math. It's a very top-down approach, where you only get into mathematical details once you understand the high-level models being used.
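For a rough idea of what that first lesson amounts to: you fine-tune a pretrained ImageNet model rather than training anything from scratch. This is my own sketch in Keras, not the course's actual notebook (the course uses its own library), and the `data/train` and `data/valid` directory layout is an assumption:

```python
# Sketch of the transfer-learning recipe: reuse a pretrained ImageNet CNN
# and retrain only a small head for cats vs. dogs.
# Assumes images organized as data/train/{cats,dogs} and data/valid/{cats,dogs}.
import tensorflow as tf

train = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32, label_mode="binary")
valid = tf.keras.utils.image_dataset_from_directory(
    "data/valid", image_size=(224, 224), batch_size=32, label_mode="binary")

base = tf.keras.applications.ResNet50(include_top=False, pooling="avg",
                                      input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained features

model = tf.keras.Sequential([
    tf.keras.layers.Lambda(tf.keras.applications.resnet50.preprocess_input),
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(dog)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train, validation_data=valid, epochs=3)
```

The point of the top-down approach is exactly this: you can run something competitive first, and dig into what ResNet or cross-entropy actually are later.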
I also like Andrej Karpathy's thorough explanation of backprop in his Lecture 4 cs231n video. The links to the videos have been removed from the course page for some reason; just google "cs231n video" and you'll find the YouTube links. The course page on CNNs is pretty good too.
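If you want the gist before watching: the lecture's running example is just the chain rule applied gate by gate through a tiny circuit. From memory, it's along these lines:

```python
# Backprop on a toy circuit: f(x, y, z) = (x + y) * z
x, y, z = -2.0, 5.0, -4.0

# forward pass
q = x + y          # q = 3
f = q * z          # f = -12

# backward pass: local gradients, chained from the output back
df_dq = z          # d(q*z)/dq = z
df_dz = q          # d(q*z)/dz = q
df_dx = df_dq * 1.0  # d(x+y)/dx = 1, then chain rule
df_dy = df_dq * 1.0  # d(x+y)/dy = 1, then chain rule

print(df_dx, df_dy, df_dz)  # -4.0 -4.0 3.0
```

Every layer in a real network backpropagates the same way; there's just a lot more of them.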
This is accurate and also why I dropped out pretty quickly.
LAFF and the Andrew Ng Machine Learning course are true semester-length courses, but I'm not actually sure this one is.
This simplicity means limitations for advanced users, but the tool is fantastic for getting a grasp of deep learning.
"To help more developers embrace deep-learning techniques, without the need to earn a Ph.D". Oh, good, I can do this.
"These fundamental concepts are taken for granted by many, if not most, authors of online educational resources about deep learning". Yup, true with this one as well.
Of course, 9 lines is a little dense, even with numpy. In practice, I got more understanding out of the slightly longer version that clocks in at 74 lines including comments and empty lines. This is an extremely simple neural network: a single layer with just 3 neurons. My son described its intelligence as less than that of a cockroach after it'd been stepped on.
It works, though. It's able to accurately guess the correct response for the trivial pattern it's given, and you can follow the logic through so you understand each simple step in the process. In a follow-up blog post, there's a slightly smarter neural network with a second layer and a mighty 9 neurons.
These examples are very approachable; it's about as simple a neural network as you can get. If you're new to machine learning, understanding how it works helps illuminate the more sophisticated networks described in Martin Görner's presentation.
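For reference, the whole idea fits in a short numpy script. This is my own reconstruction in that style, not the blog post's exact code, assuming the usual toy setup where the label is simply the first input feature:

```python
import numpy as np

# Toy training set: 4 examples, 3 features; the label is just column 0.
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])
y = np.array([[0, 0, 1, 1]]).T

rng = np.random.default_rng(1)
w = 2 * rng.random((3, 1)) - 1             # one output unit, three weights

for _ in range(10000):
    out = 1 / (1 + np.exp(-X @ w))         # sigmoid forward pass
    err = y - out                          # how far off we are
    w += X.T @ (err * out * (1 - out))     # gradient step via the chain rule

print(out.round(3))                        # approaches [[0], [0], [1], [1]]
```

Everything a bigger network does is a scaled-up version of that loop: forward pass, error, gradient, weight update.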
PS: More generally, is there a guide that explains how to robustly encode real numbers as the output of neurons? I've tried searching for one, but couldn't find anything.
But I was hoping for a more scientific answer: how do researchers typically approach this problem? And is there a strong consensus among them in this area?
It seems like such a general problem.
Fancy math is useful for explaining why it works.
But this sort of content is good for explaining to engineers -how- it works, which is ultimately how I need to understand things before the why becomes interesting to me.
There's a TensorFlow R port, but it requires setting up a working version of TensorFlow in Python. And once you have that set up, all the documentation, error messages, etc. are for Python, so you might as well just use Python.
Thanks for the info.