Deep Learning for NLP without Magic (2013) (socher.org)
57 points by jbarrow on Aug 18, 2014 | 6 comments



This tutorial is excellent, and Richard Socher has been doing some really fascinating (though complicated) work with deep learning for NLP. I also have a few other good references for people who want to investigate this area.

One of the most insightful comments I have seen about deep learning for NLP is in this reddit r/MachineLearning thread [1]. As that poster mentions, much of the theory behind using neural networks for NLP tasks was laid out pretty clearly in a 2003 paper by Yoshua Bengio [2]. At a high level, the basic learner is very similar to many of the other deep architectures used for images and audio.
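To give a feel for what that paper describes: the model predicts the next word from the concatenated embeddings of the previous few words, through a tanh hidden layer and a softmax over the vocabulary. Here is a rough numpy sketch of just the forward pass (all sizes are made up, and I'm leaving out the paper's bias terms and optional direct input-to-output connections):

    import numpy as np

    V, d, n_context, h = 10000, 50, 3, 100   # vocab, embedding dim, context, hidden

    rng = np.random.default_rng(0)
    C = rng.normal(0, 0.1, (V, d))              # word embedding matrix
    H = rng.normal(0, 0.1, (n_context * d, h))  # input -> hidden weights
    U = rng.normal(0, 0.1, (h, V))              # hidden -> output weights

    def forward(context_ids):
        """context_ids: indices of the previous n_context words."""
        x = C[context_ids].reshape(-1)       # concatenate context embeddings
        a = np.tanh(x @ H)                   # hidden layer
        logits = a @ U
        e = np.exp(logits - logits.max())    # softmax over the whole vocabulary
        return e / e.sum()                   # P(next word | context)

    p = forward([12, 407, 3])
    print(p.shape, p.sum())                  # (10000,) 1.0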

There is also a pretty great blog post [3] by Radim Rehurek (author of gensim) about how he implemented word2vec (a "deep learning"-style model for text) in Python, while also getting a performance improvement over the C version!
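If you just want to try word2vec rather than reimplement it, gensim's API is compact. A rough usage sketch with a toy corpus (keyword names follow recent gensim releases; older versions used e.g. 'size' instead of 'vector_size'):

    from gensim.models import Word2Vec

    # Toy corpus: a list of tokenized sentences.
    sentences = [
        ["deep", "learning", "for", "nlp"],
        ["word", "vectors", "capture", "similarity"],
        ["nlp", "uses", "word", "vectors"],
    ]

    # Train a small skip-gram model (sg=1); sizes are illustrative.
    model = Word2Vec(sentences, vector_size=50, window=2,
                     min_count=1, sg=1, epochs=50)

    vec = model.wv["nlp"]                  # the learned vector for "nlp"
    print(model.wv.most_similar("word"))   # nearest neighbors by cosine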

[1] http://www.reddit.com/r/MachineLearning/comments/2doi7e/what...

[2] http://jmlr.org/papers/v3/bengio03a.html

[3] http://radimrehurek.com/2014/02/word2vec-tutorial/


I would really like to do some neural network stuff, but I have trouble reading the math. For example, they have a maxent classifier in the slides, but I'd have no idea how to convert that to code.

Any resources to learn that stuff?


There is a nice tutorial here [1] on maxent classifiers. Ultimately, in neural networks there are a number of cost functions you can use for the last layer; a softmax output with a cross-entropy loss is another option that may be easier to understand, though possibly less performant for NLP tasks.
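To make that concrete: a maxent classifier is just multinomial logistic regression, i.e. a softmax over linear feature scores trained to maximize log-likelihood. A minimal numpy sketch with batch gradient descent (the data and sizes here are made up, not from the slides):

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def train_maxent(X, y, n_classes, lr=0.1, steps=500):
        """X: (n, d) feature matrix; y: (n,) integer labels."""
        n, d = X.shape
        W = np.zeros((d, n_classes))
        Y = np.eye(n_classes)[y]               # one-hot labels
        for _ in range(steps):
            P = softmax(X @ W)                 # P(class | x) for every example
            W -= lr * X.T @ (P - Y) / n        # gradient of the neg. log-likelihood
        return W

    # Tiny synthetic problem: two Gaussian blobs.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    W = train_maxent(X, y, n_classes=2)
    print((softmax(X @ W).argmax(axis=1) == y).mean())  # training accuracy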

I thought the introduction to neural networks in Andrew Ng's Coursera course (even though it means writing MATLAB) was quite good; it has you implement backprop, cost functions, etc. while providing helper code to make things easier. I highly recommend working through that course if you are interested in ML in general [2].
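For a taste of what implementing backprop involves, here is a minimal one-hidden-layer network in numpy (the course uses MATLAB and its own conventions; this is just an illustrative translation with made-up data):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)  # XOR-like target

    W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

    lr = 0.5
    for _ in range(2000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        # Backward pass: cross-entropy gradients, layer by layer.
        dz2 = (p - y) / len(X)                 # dL/dz2 for sigmoid + cross-entropy
        dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
        dz1 = (dz2 @ W2.T) * h * (1 - h)       # chain rule through the hidden layer
        dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)
        # Gradient descent step.
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

    print(((p > 0.5) == y).mean())             # training accuracy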

[1] http://www.cs.berkeley.edu/~klein/papers/maxent-tutorial-sli...

[2] https://www.coursera.org/course/ml


You can check out Machine Learning: An Algorithmic Perspective by Stephen Marsland [0], which takes a less math-driven approach to ML. The code is available online if you want to take a look at it (it's written in Python) [1].

[0]: http://www.crcpress.com/product/isbn/9781420067187

[1]: http://seat.massey.ac.nz/personal/s.r.marsland/MLBook.html


The Stanford deep learning tutorials (http://deeplearning.stanford.edu/wiki/index.php/UFLDL_Tutori...) have pretty good notes and exercises with detailed guidance / starter code. Depending on where exactly your hangups are, working through some of the coding exercises there might be useful.


http://www.metacademy.org/ is the best resource for this.



