Sequence-predicting RNNs are basically unsupervised, in that they can learn from lots of raw, unlabelled data. And the internal representations they learn turn out to be useful, and can be adapted for other tasks. There is lots of old work on unsupervised learning rules for RNNs, including recurrent autoencoders and history compression.
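To make the idea concrete, here is a minimal sketch of unsupervised pretraining by next-step prediction, written in PyTorch for illustration. The toy corpus, model, and hyperparameters are all assumptions made up for this example, not anything from the original work: the RNN is trained only to predict the next character of raw text, and its hidden states can then be reused as features for a downstream task.

```python
import torch
import torch.nn as nn

# Toy unlabelled data: a raw character stream, no labels anywhere.
corpus = "the quick brown fox jumps over the lazy dog "
vocab = sorted(set(corpus))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in corpus])

class NextStepRNN(nn.Module):
    def __init__(self, vocab_size, hidden_size=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))  # h: internal representations per step
        return self.head(h), h          # logits for the next symbol, plus features

model = NextStepRNN(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Train purely by predicting the next character of the stream.
inputs, targets = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
for step in range(200):
    logits, _ = model(inputs)
    loss = loss_fn(logits.reshape(-1, len(vocab)), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The hidden states can now serve as learned features for another task.
with torch.no_grad():
    _, features = model(inputs)
print(features.shape)  # (1, sequence length, hidden_size)
```

The point of the sketch is just that the training signal comes entirely from the sequence itself; any supervised task would only be attached afterwards, on top of the learned hidden states.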