Natural Language Processing with Deep Learning (stanford.edu)
210 points by jonbaer 9 months ago | 28 comments



The Stanford machine learning courses are probably the best open education contributions I've encountered. From Ng's CS229 material to Karpathy's rendition of CS231n, these are some of the best pedagogical materials on machine learning/deep learning available.


Unfortunately the videos for this course won't be made available until after the course has finished.



Thanks for the link, just subscribed.

Years ago I took Chris Manning and Dan Jurafsky’s Coursera NLP class and it was excellent. I also own, I think, every book they have written including Jurafsky’s book on food.

It is extremely generous of top universities to make their classes available online. Of course, watching the videos and trying the homework assignments on one’s own is not the experience of going to Stanford, but it is less expensive!


Thx. This seems to be part of a playlist of 2019 lectures:

https://www.youtube.com/watch?v=8rXD5-xhemo&list=PLoROMvodv4...


I still remember Nick Parlante's Intro to CS. Great stuff.

https://lagunita.stanford.edu/courses/Engineering/CS101/Summ...


Yup! I made heavy use of his codingbat material during 1st year.

https://codingbat.com/java


True; at heart, I am totally in love with CS231n (Yeung and Johnson)


That course is really the best blend of theory and application. I find their approach more tasteful than fast.ai's, but to each their own!


I took this class and can vouch for it. They update the class every year to go over recent research - not an easy task in such a fast-moving field. For example, this offering covers the Transformer architecture, which has recently been used to obtain state-of-the-art results across a wide range of NLP tasks.
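
(For anyone who hasn't met the Transformer yet: the core idea is self-attention, where every position in a sequence attends to every other position. Here's a minimal sketch using PyTorch's built-in encoder modules - toy hyperparameters of my own, not anything from the course assignments:)

    import torch
    import torch.nn as nn

    # Illustrative sizes only; real models are much larger.
    d_model, n_heads, n_layers = 64, 4, 2
    layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, dim_feedforward=128)
    encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    seq_len, batch_size = 10, 3
    x = torch.randn(seq_len, batch_size, d_model)  # (sequence, batch, features)
    out = encoder(x)   # same shape; each position has attended to every other position
    print(out.shape)   # torch.Size([10, 3, 64])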


Tangentially related if you're interested in keeping up to date with what's going on in the field:

Sebastian Ruder's blog has many good posts on recent advancements in NLP (literature reviews, conference highlights): http://ruder.io/

The posts are concise and accessible enough that you can skim through them quickly. Then you can go check out the paper directly if something piques your interest.


I know this talking point has been brought up by different people, but it's worth pointing out that Transformers were already covered in the class in 2018. https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1184...


Are you a student at Stanford? I was wondering whether I could join just for this course...


The videos from the winter 2017 offering are freely available on YouTube.

https://www.youtube.com/playlist?list=PLqdrfNEc5QnuV9RwUAhoJ...

The assignments are at the class webpage too.


And this is the 2019 playlist: https://www.youtube.com/watch?v=8rXD5-xhemo&list=PLoROMvodv4...

Quote from course page: "This year, CS224n will be taught for the first time using PyTorch rather than TensorFlow (as in previous years)."


Is there a good course for standalone Natural Language Processing?

I came across MIT OCW’s Advanced NLP - https://ocw.mit.edu/courses/electrical-engineering-and-compu... - but does someone have a similar or better standalone NLP course in mind? From MIT OCW, Stanford, Coursera, Udacity, et al.?

I am very much interested. Could be free or paid.

Even PDF’s of lecture notes could do.

Edit - Found one standalone NLP course by CMU, looks good - http://demo.clab.cs.cmu.edu/NLP/


Stanford NLP course by Manning and Jurafsky is a go-to for me. https://www.youtube.com/watch?v=3Dt_yh1mf_U&list=PLQiyVNMpDL...


Thanks for this :)


The following is also good:

https://web.stanford.edu/class/cs224u/

While you asked for a course, check out my introductory presentation on the subject here (PDF slides available for download):

https://www.siliconvalley-codecamp.com/Session/2014/introduc...

An attendee, claiming to have read 1.5 books on NLP, called it the broadest coverage of NLP he had ever seen. Some of the attendees told me that I had inspired them to turn their careers to NLP.


I skimmed through your PDF. It's very exhaustive and all-encompassing.

But what really got me excited was that you had included two of Steven Pinker's books in your "Suggested References". A few weeks ago, I was going over the Wikipedia entry for his book "The Sense of Style" and wondering whether it has any implications for linguistics.

https://en.wikipedia.org/wiki/The_Sense_of_Style

So, my question to you is: does "The Sense of Style" bear any NLP-relevant content?

Thanks for such an awesome PDF :) Can't wait for my day to end!


Glad that you like it so far. :-)

I wish I had recorded the session I gave. It was far more interesting and engaging than the slides are on their own.

I haven't read this book, but I have read a blog post by Pinker along the same lines. I also just skimmed through the Wikipedia link you mentioned.

The book would not be a recommendation from an NLP-algorithms perspective. I am sure, though, that it would be a good book, as Pinker is a mind-opening writer.

I'll nevertheless give you some deep food for thought relating to languages:

Check out slide 14 in my talk, especially the remark on its right side. It is a powerful thought.

There is an equilibrium process involved in the shaping of a language over time, though a language must by definition have at least some standardization for it to work, which means it resists change.

What Pinker is saying, at a high level, is that the official rules of grammar sometimes deviate from that equilibrium point. For example, technical jargon and acronyms are often easier for the speaker than for the listener. Poor handwriting is less tiring for the writer but harder for the reader.

There would also be deviations which make it harder for both the speaker and the listener.

Yet, in both of the cases above, the language would resist change.

Once writing came into being (keep in mind that we have been speaking for a few million years and writing for only a few thousand, so there is no comparison!), written material persisted much longer than vibrations in air molecules, or our memories, do. That further slows down the language's velocity of change, as standardization now reaches across time too.

We are now at a point where language is standardizing across the world and getting frozen on the Internet, so there is a further reduction in velocity. (Although newer concepts are being added to languages ever faster...)

How do you, in such cases, make the above deviations from the optimum go away?

There are some rules of language and grammar that shouldn't exist. They are like legacy code.

Pinker is educating us about such rules. He's trying to bring style back toward that optimum.

Let's take a simple example. Did you know that the comma classically sits inside the quotation marks? Here's an example I picked from [1]:

"Good morning, Frank," said Hal.

Note that the comma after Frank is inside the quotes! Why should it be that way? No wonder putting the comma outside is gaining acceptance. :-)

There are even weird rules for what happens when multiple paragraphs are to be included inside a single quotation. (Hint: the number of opening and closing quotation marks is not equal in some English dialects in this scenario. Weird, hmmm.)

These rules should just go away. It's better to choose the optimum for the language and make those cultural and stylistic shifts happen.

Take care. Good discussion. Feel free to reach out again.

And please feel free to refer others to my slides page as you see fit. :-)

[1] https://en.m.wikipedia.org/wiki/Quotation_marks_in_English


Wow, the info you have provided is so valuable.

What you said above definitely sounds Steven Pinker-ish.

I feel like I am embarking on a learning process that’s gonna stay with me for a long time, courtesy of our short correspondence.

I know I will be needing you again, so I am going to save your email ID in my contacts. I will send you a hi on your email ID, if you don’t mind?

And I am recommending your PDF to my batchmates who have enrolled for the same course :)

Muchas gracias for the wonderful explanation!


Sure! alokgovil hot mail :-)

Also, feel free to connect on LinkedIn: /in/alokgovil.


Hi Alok,

Thanks for taking interest in my question and responding :)

Your paper sounds very exciting and promising, and I am going to finish reading it over the coming weekend.


Skimming through, I came across a fun example (new to me at least) in the introduction to translation, lecture 8 slides, page 55.

"hn hn hn hn hn hn hn hn hn hn hn hn hn hn hn hn hn hn hn" in Somali gives "Serious Therapic Anxiety syndrome" on google translate.

Perhaps not surprising, but fun. The link from the slide (below) has previously been submitted to hn without notable discussion.

https://www.skynettoday.com/briefs/google-nmt-prophecies


Chris Manning is responsible for so much work in this field. I can't wait to go through this video series. Thanks for all your hard work and I want you to know how much it's appreciated.


I also like CMU's Neural Networks for NLP class: https://www.youtube.com/watch?v=pmcXgNTuHnk


Great lectures, with a lot of intuition about recent research and clear explanations.

Some videos are already online, starting with https://youtu.be/8rXD5-xhemo




