
Deep Dive into Math Behind Deep Networks - headalgorithm
https://towardsdatascience.com/https-medium-com-piotr-skalski92-deep-dive-into-deep-networks-math-17660bc376ba
======
m0zg
That's not a "deep dive"; it's the bare minimum, disappointingly.

------
nafizh
I would like to see someone explain the math behind recurrent neural networks.
Feed-forward neural networks are fairly straightforward, and there are many,
many blog posts explaining them already.

~~~
jing
I think this resource could be helpful:

[http://colah.github.io/posts/2015-08-Understanding-LSTMs/](http://colah.github.io/posts/2015-08-Understanding-LSTMs/)

Essentially, RNNs and feed-forward networks are very similar - an RNN is just
a feed-forward network "unrolled through time", with every timestep sharing
the same weights. The activations differ slightly as well, but the core
concept is the same; it's not a completely different idea.
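To make the weight sharing concrete, here is a minimal NumPy sketch of a
vanilla RNN forward pass (all names and sizes are illustrative, not from the
article): the same matrices are applied at every timestep, so the unrolled
network is just a deep feed-forward net with tied weights.

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 3, 4, 5

W_xh = rng.normal(size=(hidden_size, input_size)) * 0.1   # input -> hidden
W_hh = rng.normal(size=(hidden_size, hidden_size)) * 0.1  # hidden -> hidden
b_h = np.zeros(hidden_size)

xs = rng.normal(size=(seq_len, input_size))  # one input vector per timestep
h = np.zeros(hidden_size)                    # initial hidden state

for x in xs:
    # Same W_xh, W_hh, b_h at every step -- this reuse is the only thing
    # that distinguishes the unrolled RNN from an ordinary deep network.
    h = np.tanh(W_xh @ x + W_hh @ h + b_h)

print(h.shape)  # (4,)
```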

------
platz
The backpropagation section was just a list of formulas plus "it's because of
the chain rule".
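For anyone wanting more than "it's because of the chain rule": here is a
worked single-neuron example (my own toy setup, not from the article). For
y = sigmoid(w*x + b) and loss L = (y - t)^2 / 2, the chain rule gives
dL/dw = (y - t) * y * (1 - y) * x, which we can check against a numerical
finite-difference gradient.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, b, x, t):
    y = sigmoid(w * x + b)
    return 0.5 * (y - t) ** 2

w, b, x, t = 0.7, -0.2, 1.5, 1.0
y = sigmoid(w * x + b)

# Chain rule, factor by factor: dL/dy * dy/dz * dz/dw
analytic = (y - t) * y * (1 - y) * x

# Central finite difference as a sanity check
eps = 1e-6
numeric = (loss(w + eps, b, x, t) - loss(w - eps, b, x, t)) / (2 * eps)
print(abs(analytic - numeric) < 1e-8)  # True
```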

------
mjfl
isn't it just gradient descent?

~~~
ousta
yes, just like a big chunk of what ML is.
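For reference, the update rule being alluded to - backprop computes the
gradients, and gradient descent consumes them - can be sketched on a toy
one-dimensional function (my own example):

```python
# Minimize f(w) = (w - 3)^2 by repeatedly stepping against the gradient.
w, lr = 0.0, 0.1
for _ in range(100):
    grad = 2 * (w - 3)  # df/dw, the role backprop plays in a network
    w -= lr * grad      # the gradient descent update
print(round(w, 4))  # 3.0
```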

