Stanford Unsupervised Deep Learning Tutorial (2014) (stanford.edu)
401 points by espeed on Jan 9, 2017 | 50 comments



I went through every exercise from a previous version of the Stanford tutorial a few years ago, when it was called the UFLDL tutorial.

The sparse autoencoder exercise (http://ufldl.stanford.edu/wiki/index.php/Exercise:Sparse_Aut...) was my favourite, and I think it's still relevant today, even though sparse autoencoders have become a somewhat neglected concept.
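For anyone who hasn't seen it, the core of the exercise is minimizing reconstruction error plus a KL-divergence penalty that pushes each hidden unit's average activation toward a small target value. A minimal NumPy sketch of that cost function (my own notation, not the exercise's MATLAB starter code):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sparse_autoencoder_cost(W1, b1, W2, b2, X, rho=0.05, beta=3.0, lam=1e-4):
        # Reconstruction error + weight decay + KL sparsity penalty.
        # X has shape (n_features, n_examples), following the UFLDL convention.
        m = X.shape[1]
        A1 = sigmoid(W1 @ X + b1)        # hidden activations
        X_hat = sigmoid(W2 @ A1 + b2)    # reconstruction of the input
        rho_hat = A1.mean(axis=1, keepdims=True)  # average activation per hidden unit
        kl = np.sum(rho * np.log(rho / rho_hat)
                    + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
        recon = 0.5 * np.sum((X_hat - X) ** 2) / m
        decay = 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
        return recon + decay + beta * kl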


This is one of my 2017 resolutions: learn more about AI/ML/DL. The field is so big I'm not sure where to start; I've started the Udacity AI class and started reading "AI: A Modern Approach" by Russell and Norvig.

Has anyone done this before, and do you have any recommendations on where to start, good resources, etc.?


For Deep Learning, start with MNIST. The 1998 paper[1] describing LeNet goes into a lot more detail than more recent papers. Also there's an excellent video from Martin Gorner at Google that describes a range of neural networks for MNIST[2]. The source code used in his talk is excellent[3].

[1] http://vision.stanford.edu/cs598_spring07/papers/Lecun98.pdf

[2] https://www.youtube.com/watch?v=vq2nnJ4g6N0

[3] https://github.com/martin-gorner/tensorflow-mnist-tutorial
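If you want something runnable right away, here's a tiny LeNet-style convnet on MNIST in Keras (my own minimal sketch, not the code from the talk):

    import tensorflow as tf
    from tensorflow.keras import layers, models

    # Load MNIST, add a channel dimension, and scale pixels to [0, 1].
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., None] / 255.0
    x_test = x_test[..., None] / 255.0

    # conv -> pool -> conv -> pool -> dense, in the spirit of LeNet-5.
    model = models.Sequential([
        tf.keras.Input(shape=(28, 28, 1)),
        layers.Conv2D(6, 5, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(16, 5, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(120, activation="relu"),
        layers.Dense(84, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))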


Starting with DL? That's like learning calculus before geometry... YMMV?


Convolutional nets for digit recognition are certainly easier to learn than ML generally, but I wasn't suggesting to start with Deep Learning. I was suggesting that when studying Deep Learning, to start with MNIST.

It was easier for me to start with the LeCun 1998 paper than to watch all the theorem-proving in the online courses, but that's just personal preference.


Sure, but I take it the original comment wasn't from someone with much ML background. And getting to grips with log likelihoods, (cross-)entropy, linear/logistic regression, evaluation metrics, and maybe even some Bayesian statistics might be rather helpful before jumping on the DL bandwagon.

While there are far too many hardcore statisticians and academics who love their theorems more than anything, not all classes are that way. I think I'd have loved learning ML from today's MOOCs, instead of from the theorem provers and formula reciters I had to deal with (and the real-life exams you can't retake every 8 hours...).


I don't have an ML background and I had no problem understanding the LeCun 1998 paper. Naturally, the more ML one knows the better, I'm just encouraging people to dive in and try without getting intimidated.


Anecdotally, one astonishing observation I often make is that "breakthrough" papers [1] are nearly universally among the most accessible, clear, and easy to follow. From Watson and Crick on DNA in molecular biology, to backpropagation by Hinton in ML, to Cox's survival model in statistics, the most significant advances often tend to be the "easiest" to understand (in hindsight only, naturally).

[1] http://www.nature.com/news/the-top-100-papers-1.16224


Thanks! I'll definitely keep this for when I get to DL!


Without a doubt, do the Andrew Ng course on Machine Learning.

https://www.coursera.org/learn/machine-learning

It's excellent.


I've started it already; seems like this one and the Udacity AI course are the two best for now :) Thanks!


Another resource I've found really useful is course.fast.ai - it presents a very practical approach to deep learning in a way that would be particularly familiar to someone who has done any amount of programming.


I took the Ng course, which I recommend for a basic understanding of the ideas, but the fast.ai course enables you to compete in Kaggle competitions in a fairly short time.

Just watch the course.fast.ai intro/overview video, and decide from there.


This is a very well-written intro to Deep Learning: http://neuralnetworksanddeeplearning.com


I've been going through the online book. I've been enjoying the tutorials so far.


True, it is not so easy to figure out. I faced a similar situation, but I watched the Udacity ML course and made an attempt.

Read my post:

"How I wrote my first Machine Learning program in 3 days"

http://blog.adnansiddiqi.me/how-i-wrote-my-first-machine-lea...


I think this is a great resource:

https://www.r-bloggers.com/in-depth-introduction-to-machine-...

It doesn't cover deep learning at all, but contains material about many other ML techniques.


Why did you add the word "Unsupervised" when it wasn't contained in the original text?


This was precisely what made me click the link.


It is in the <title> of the article.

UFLDL Tutorial = Unsupervised Feature Learning & Deep Learning Tutorial.


Whoa you're right, there it is.


When is this from? The most recent paper in the references is from 2010.


I work in the field and I'm with aaronjg - this is ancient in the scheme of deep learning and very far from modern best practices. I honestly find it confusing when I see links like this hit the top of the page with such a strong number of upvotes. There is more modern, better-run material now. Even if this is being referenced historically, it should include a timestamp.


Where would you recommend finding the more modern and better material? Thanks.


For general introductory material in this style from Stanford, CS231n (fairly general, with a specialization in vision) and CS224d (specializing in DL for NLP) are great. The material for both is online for free, and the video lectures (taken down due to legal challenges regarding accessibility) are available if you look hard enough ;)

If you're particularly after unsupervised deep learning, I'd recommend you do one or both of the above (or equivalent) and then read relevant recent papers.

http://cs231n.stanford.edu

http://cs224d.stanford.edu


A link to an equivalent type of page that teaches modern material would be helpful.


See my reply to technics256 and feel free to ask other questions :)


The latest GitHub commit was made 3 years ago, so the course is probably that old. However, I don't think the age matters. I am using this tutorial in parallel with cs231n and it's been good so far, at least for convnets.


I've been following the book Grokking Deep Learning[0]. It isn't finished yet but is in active development, and I like the style of explanation. So far I've learned how to create a very simple neural network, one with three inputs, and a very simple deep network with three inputs, a hidden layer of size 4, and one output. I'm still looking forward to the next MEAP releases; my goal is to understand deep learning for images so I can apply it to the image processing work in my day job.

[0] - https://www.manning.com/books/grokking-deep-learning
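For context, the kind of network described above looks roughly like this; a NumPy sketch of my own, not the book's code:

    import numpy as np

    # A tiny 3-input -> 4-hidden -> 1-output network, trained with plain
    # gradient descent on squared error.
    np.random.seed(0)
    X = np.array([[1.0, 0.5, 0.2],
                  [0.9, 0.1, 0.4]])       # two examples, three inputs each
    y = np.array([[1.0], [0.0]])

    W1 = np.random.randn(3, 4) * 0.1      # input -> hidden weights
    W2 = np.random.randn(4, 1) * 0.1      # hidden -> output weights
    alpha = 0.1

    relu = lambda z: np.maximum(z, 0)

    for _ in range(1000):
        h = relu(X @ W1)                  # hidden layer, shape (2, 4)
        pred = h @ W2                     # output, shape (2, 1)
        delta_out = pred - y              # gradient of squared error w.r.t. pred
        delta_hid = (delta_out @ W2.T) * (h > 0)  # backprop through ReLU
        W2 -= alpha * h.T @ delta_out
        W1 -= alpha * X.T @ delta_hid

    print(pred)  # should be close to y after training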


I've also been following this book, and it seems as if progress has stalled. The last chapter sent out is incomplete and half-baked, there is no response from the author in the book forum, and errors in previous chapters remain unfixed.


The author says a big update is coming and the book is going through a review process.

https://twitter.com/iamtrask/status/818114151339950081


Good to know, thanks. Guess I'm a bit of a dinosaur for not following social networks :-)


Guys,

please stop posting this kind of old stuff. Or at least, other guys - please stop upvoting it.

It's a very old tutorial, there are tons of better ones, in every aspect, today.

Somebody once smartly observed that the ratio of upvotes to comments is a good indicator of whether something is really good: if the ratio is very high, it's a red flag. This article is a perfect example - nobody has anything interesting to say about it...

Maybe HN admins could take this into account in the sorting algorithm.


On a side note, does it feel to anyone else like this area is a bit of a moving target? Granted, this isn't uncommon for topics experiencing a lot of growth, but it's a bit troubling that there seems to be a lack of reference publications that experts in the area can all agree on as a starting point. I honestly don't feel like blogs or framework tutorials are a great replacement for this.

Also, I don't know of any other topic area where I would look at a resource that describes fundamental building blocks in an instructive way in 2014 and say "don't read this, it's irrelevant 2-3 years later". For languages/libraries/frameworks, sure. But for basic theory? That strikes me as very alien.


>Also, I don't know of any other topic area where I would look at a resource that describes fundamental building blocks in an instructive way in 2014 and say "don't read this, it's irrelevant 2-3 years later". For languages/libraries/frameworks, sure. But for basic theory?

Yeah. A lot of deep learning papers boil down to "we tried to use X architecture on Y dataset, and it seems to produce small error rate". I don't know of any other area of computer science where getting results from an algorithm without explaining how those results occur is publishable.


I agree, that makes it very opaque, but it seems that MOOCs, and especially that one on Coursera, remain continuously popular.


What are those better tutorials? I'm interested.


I really liked these:

- CS231n from Stanford (outstanding material by Andrej Karpathy): http://cs231n.github.io/

- Hugo Larochelle's videos: https://www.youtube.com/channel/UCiDouKcxRmAdc5OeZdiRwAg

- Michael Nielsen's tutorial: http://neuralnetworksanddeeplearning.com/

- Chris Olah's blog: http://colah.github.io/

- Keras blog: https://blog.keras.io/

There is also this one (not really a tutorial, though) - Goodfellow, Bengio, Courville: http://www.deeplearningbook.org/

Also, there were a lot of 'Ask HN' topics - take a look.


I'm currently going through this which is to the point and easily understandable: http://neuralnetworksanddeeplearning.com/

My next step is https://www.tensorflow.org/tutorials/ as I want to move to TF with the basics already in hand.


The one that was really easy to grasp and gave me good intuition about the subject is https://karpathy.github.io/neuralnets/


>Maybe HN admins could take this into account maybe into the sorting algo.

It is, via exponential decay proportional (among other factors) to that upvote/comment ratio. Can't find the source atm.

So I take that as an opportunity to comment without voting, since I can't downvote - but also as a caution against needless comments, since I can only upvote so much.


Unfortunately it seems to me that most machine learning articles fit this criterion.


It's missing a link.

"This tutorial assumes a basic knowledge of machine learning ... go to this Machine Learning course [no link]"

Am I missing something?


Perhaps not exactly the right link, but Andrew Ng's Machine Learning course (also from Stanford) teaches exactly the required things in the first three weeks: https://www.coursera.org/learn/machine-learning


It's been a while since I did any serious maths; would I be lost in a course like this?



No. The basic linear algebra and calculus concepts used on the course are rather simple, and the videos hold your hand through the math quite well. You don't need deep understanding of the math to pass the course.


Probably not. I'm currently about halfway through and the majority of required mathematics revolves around linear algebra (matrix multiplication almost exclusively) and basic algebra. There is a linear algebra refresher as well.


I got 100% on the course and don't really do math myself. He skips over the derivations and gives you the formula. The way you interact with the math is by turning a formula into code. It's actually fun and refreshing.
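For example, the vectorized gradient descent update for linear regression, theta := theta - (alpha/m) * X'(X*theta - y), becomes essentially one line of code. A sketch in NumPy rather than the course's Octave:

    import numpy as np

    # Batch gradient descent for linear regression, vectorized:
    # theta := theta - (alpha / m) * X.T @ (X @ theta - y)
    def gradient_descent(X, y, alpha=0.01, iters=1500):
        m, n = X.shape
        theta = np.zeros(n)
        for _ in range(iters):
            theta -= (alpha / m) * X.T @ (X @ theta - y)
        return theta

    # Toy usage: fit y = 2 + 3x with a bias column prepended to X.
    x = np.linspace(0, 1, 50)
    X = np.column_stack([np.ones_like(x), x])
    y = 2 + 3 * x
    print(gradient_descent(X, y, alpha=0.5, iters=5000))  # ~[2.0, 3.0]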




