I would like to see someone explain the math behind recurrent neural networks. Feed-forward neural networks are fairly straightforward, and there are already many, many blog posts explaining them.
Essentially, RNNs and feed-forward networks are very similar: an RNN is just a feed-forward network "unrolled through time", with every timestep sharing the same weights. The main difference is in the activations: the hidden state at each timestep depends on both the current input and the previous timestep's hidden state. But the core concept is the same as in feed-forward networks; it's not a completely different idea.
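To make the "unrolled through time" picture concrete, here's a minimal sketch of a vanilla RNN forward pass in NumPy. The recurrence is h_t = tanh(W_xh x_t + W_hh h_{t-1} + b_h); the sizes, the tanh nonlinearity, and the variable names are just illustrative choices, not from any particular library:

```python
import numpy as np

# A minimal sketch of a vanilla RNN forward pass; dimensions and the
# tanh nonlinearity are illustrative choices.
rng = np.random.default_rng(0)

input_size, hidden_size, seq_len = 3, 4, 5

# One set of weights, reused at every timestep ("shared through time").
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden
b_h = np.zeros(hidden_size)

xs = rng.normal(size=(seq_len, input_size))  # a toy input sequence
h = np.zeros(hidden_size)                    # initial hidden state

# "Unrolling through time": the same weights are applied at every step,
# and the hidden state carries information forward to the next step.
for t in range(seq_len):
    h = np.tanh(W_xh @ xs[t] + W_hh @ h + b_h)  # h_t = tanh(W_xh x_t + W_hh h_{t-1} + b_h)
    print(f"t={t}, h={np.round(h, 3)}")
```

Note how the loop applies the same W_xh and W_hh at every timestep. If you wrote the loop out by hand, you'd get something that looks exactly like a deep feed-forward network, except that every "layer" shares the same weights and also takes the next input x_t.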