Prof. Sergey Levine is REALLY good at explaining the intuitions of DL algorithms. This class also includes lectures on ML basics and very approachable assignments.
Many classes/blog posts start by describing what a neuron is - IMHO that's a terrible way to teach a beginner.
To understand DL, you should know why we need activations (because stacked linear models are no more expressive than one linear model) and why we need back-propagation (because we're optimizing a loss with SGD). This class is great at explaining those things intuitively. Following along, I felt I built a pretty solid ML/DL foundation for myself.
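To make the "linear models are not enough" point concrete, here's a tiny sketch (my own made-up numbers, not from the course) showing that two stacked linear layers collapse into a single linear layer, while a ReLU in between breaks that collapse:

```python
import numpy as np

# Hypothetical weights and input, chosen just for illustration:
W1 = np.array([[1.0, -1.0],
               [0.0,  1.0]])
W2 = np.array([[1.0, 1.0]])
x = np.array([1.0, 2.0])

# Two stacked linear layers are equivalent to one linear layer:
linear_out = W2 @ (W1 @ x)
assert np.allclose(linear_out, (W2 @ W1) @ x)  # same map either way

# Inserting a ReLU between the layers breaks that equivalence,
# which is what lets the network represent non-linear functions:
relu = lambda z: np.maximum(z, 0.0)
nonlinear_out = W2 @ relu(W1 @ x)

print(linear_out)     # [1.]
print(nonlinear_out)  # [2.]
```

That's the whole argument for activations in two matrix multiplies: without the non-linearity, depth buys you nothing.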
The youtube playlist is here: https://www.youtube.com/playlist?list=PL_iWQOsE6TfVmKkQHucjP...