[dupe] A Primer on Neural Network Models for Natural Language Processing (2016) [pdf] (jair.org)
94 points by mpweiher on Aug 16, 2017 | 8 comments




Not much "discussion", there either - as well as (not) on the truly first link back in its day that didn't get up-voted: https://news.ycombinator.com/item?id=10338114

Anyway, to give some opinion: I've read all of Goldberg's stuff and consistently find it excellent. If you are into (statistical) NLP, his work certainly has "sine qua non" status...

EDIT: Oh, but yes, you can skip this and read the book instead if you are that interested in the topic and can shell out the bucks - it's more complete and better edited. (EDIT2: By which I mean the book is more up-to-date, with refs from 2017, etc., not that the writing in the linked paper is poor or anything!)


I believe this is the "expanded" version mentioned in that discussion.


I think this is "just" a cleaned up version of the draft included in the previous discussion. I believe the expanded version mentioned in the aforementioned link is a longer book which grew out of this paper.


You're absolutely right. I looked at the printed page number of the last page and thought "wow, 400 pages!".

Apologies for jumping the gun.


I actually found this to be one of the best explanations of this topic I've read. I fully recommend the author's book too.

I'd also recommend this, either as a follow-up or as an alternative: https://arxiv.org/abs/1703.01619


Are you sure you linked to the right article? The linked article is about neural machine translation using seq2seq, while TFA is about neural models for all kinds of language processing.


It's probably domain/industry/company dependent, but the vast majority (>90%) of the NLP work I do nowadays uses sequence-to-sequence models.
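
To make "sequence-to-sequence" concrete for anyone following along, here's a minimal encoder-decoder sketch in PyTorch; the class, names, and dimensions are illustrative, not taken from either paper:

    import torch
    import torch.nn as nn

    class Seq2Seq(nn.Module):
        """Minimal encoder-decoder: encode a source token sequence,
        then decode target tokens conditioned on the encoder's final state."""
        def __init__(self, src_vocab, tgt_vocab, emb_dim=128, hid_dim=256):
            super().__init__()
            self.src_emb = nn.Embedding(src_vocab, emb_dim)
            self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
            self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
            self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
            self.out = nn.Linear(hid_dim, tgt_vocab)

        def forward(self, src, tgt):
            # src, tgt: (batch, seq_len) tensors of integer token ids
            _, state = self.encoder(self.src_emb(src))       # keep final (h, c)
            dec_out, _ = self.decoder(self.tgt_emb(tgt), state)
            return self.out(dec_out)                         # (batch, tgt_len, tgt_vocab) logits

At train time you'd feed the shifted target tokens as decoder input (teacher forcing) and apply a cross-entropy loss over the logits; at inference you'd decode token by token. Real systems add attention on top, but the encode-then-decode shape is the core idea.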




