The sparse autoencoder exercise (http://ufldl.stanford.edu/wiki/index.php/Exercise:Sparse_Aut...) has been my favourite one, and I think it's still relevant today, though it has become a somewhat neglected concept.
Has anyone done that before, and do you have any recommendations on where to start, good resources, etc.?
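For anyone who wants a concrete starting point, here's a minimal sketch of the cost that exercise has you implement - in Python/numpy rather than the tutorial's MATLAB. The hyperparameter names rho, beta, and lam follow the tutorial's notation; the function signature and shapes are my own assumptions:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Sparse-autoencoder cost from the UFLDL exercise: reconstruction error
    # + L2 weight decay + KL-divergence sparsity penalty on hidden activations.
    def sparse_ae_loss(W1, b1, W2, b2, X, rho=0.05, beta=3.0, lam=1e-4):
        # X: (n_visible, n_examples), columns are examples as in the tutorial
        a2 = sigmoid(W1 @ X + b1[:, None])   # hidden activations
        a3 = sigmoid(W2 @ a2 + b2[:, None])  # reconstruction of X
        m = X.shape[1]
        recon = 0.5 * np.sum((a3 - X) ** 2) / m
        decay = 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
        rho_hat = a2.mean(axis=1)            # average activation per hidden unit
        kl = np.sum(rho * np.log(rho / rho_hat)
                    + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
        return recon + decay + beta * kl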
It was easier for me to start with the LeCun 1998 paper than to watch all the theorem-proving in the online courses, but that's just personal preference.
While there are far too many hardcore statisticians and academics who love their theorems more than anything, not all classes are that way. I think I'd have loved learning ML from today's MOOCs instead of the theorem provers and formula reciters I had to deal with (not to mention real-life exams you can't retake every 8 hrs...).
Just watch the course.fast.ai intro/overview video, and decide from there.
Read my post:
"How I wrote my first Machine Learning program in 3 days"
It doesn't cover deep learning at all, but contains material about many other ML techniques.
UFLDL Tutorial = Unsupervised Feature Learning & Deep Learning Tutorial.
If you're particularly after unsupervised deep learning, I'd recommend you do one or both of the above (or equivalent) and then read relevant recent papers.
 - https://www.manning.com/books/grokking-deep-learning
Please stop posting this kind of old stuff. Or at least, everyone else: please stop upvoting it.
It's a very old tutorial; there are tons of better ones today, in every respect.
Somebody once shrewdly observed that the ratio of upvotes to comments is a good indicator of whether something is really good. If the ratio is very high, it's a red flag.
This article is a perfect example - nobody has anything interesting to say about it...
Maybe HN admins could take this into account in the sorting algo.
Also, I don't know of any other topic area where I would look at a resource that describes fundamental building blocks in an instructive way in 2014 and say "don't read this, it's irrelevant 2-3 years later". For languages/libraries/frameworks, sure. But for basic theory? That strikes me as very alien.
Yeah. A lot of deep learning papers boil down to "we tried X architecture on Y dataset, and it seems to produce a small error rate". I don't know of any other area of computer science where getting results from an algorithm without explaining how those results occur is publishable.
- Hugo Larochelle videos: https://www.youtube.com/channel/UCiDouKcxRmAdc5OeZdiRwAg
- Michael Nielsen tutorial: http://neuralnetworksanddeeplearning.com/
- Chris Olah blog: http://colah.github.io/
- Keras blog: https://blog.keras.io/
There is also this one (not really a tutorial though):
- Goodfellow, Bengio, Courville: http://www.deeplearningbook.org/
Also, there have been a lot of 'Ask HN' threads on this - take a look.
My next step is https://www.tensorflow.org/tutorials/ since I want to move to TF with the basics already in hand.
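For anyone in the same spot, the kind of "hello world" those tutorials open with looks roughly like this (a sketch from memory, so details may differ from the current pages):

    import tensorflow as tf

    # Load MNIST and scale pixel values to [0, 1].
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # Small dense classifier, the usual first model in the TF tutorials.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5)
    model.evaluate(x_test, y_test)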
It is, via exponential decay proportional (among other things) to that upvote/comment ratio. Can't find the source atm.
So I take that as an opportunity to comment rather than vote as long as I can't downvote, but also as a caution against needless comments, since I can only upvote so much.
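For what it's worth, the approximation of HN's ranking that usually circulates uses only points and age, not the upvote/comment ratio (whether comments factor in isn't publicly documented). A sketch with the folklore constants:

    # Widely circulated (unofficial) approximation of HN's front-page ranking.
    # The 1.8 "gravity" exponent and the +2 age offset are folklore values,
    # not confirmed by HN; comment counts don't appear in this version at all.
    def hn_rank(points: int, age_hours: float, gravity: float = 1.8) -> float:
        return (points - 1) / (age_hours + 2) ** gravity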
"This tutorial assumes a basic knowledge of machine learning ... go to this Machine Learning course [no link]"
Am I missing something?
And this one on Machine Learning: