
A Primer on Neural Network Models for Natural Language Processing (2016) [pdf] - mpweiher
https://www.jair.org/media/4992/live-4992-9623-jair.pdf
======
AronTrask
Prior discussion:
[https://news.ycombinator.com/item?id=15006013](https://news.ycombinator.com/item?id=15006013)

~~~
wodenokoto
I believe this is the "expanded" version mentioned in that discussion.

~~~
AronTrask
I think this is "just" a cleaned up version of the draft included in the
previous discussion. I believe the expanded version mentioned in the
aforementioned link is a longer book which grew out of this paper.

~~~
wodenokoto
You're absolutely right. I looked at the printed page number of the last page
and thought "wow 400 pages!".

Apologies for jumping the gun.

------
desku
I actually found this to be one of the best explanations of this topic I've
read. I fully recommend the author's book too.

I'd also recommend this, either as a follow-up or as an alternative:
[https://arxiv.org/abs/1703.01619](https://arxiv.org/abs/1703.01619)

~~~
wodenokoto
Are you sure you linked to the right article? The linked article is about
neural machine translation using sequence-to-sequence models, while TFA is
about neural models for all kinds of language processing.

~~~
desku
It's probably domain/industry/company dependent, but the vast majority (>90%)
of NLP work I do nowadays is sequence-to-sequence models.
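For readers unfamiliar with the sequence-to-sequence models mentioned above, here is a minimal toy sketch of the encoder-decoder data flow: an encoder compresses a variable-length input into a fixed-size state vector, and a decoder then emits output tokens one step at a time from that state. The weights are random and untrained, and all names and sizes are illustrative assumptions, not anything from the linked paper.

```python
import numpy as np

# Toy encoder-decoder ("seq2seq") sketch. Untrained, random weights --
# this only illustrates the shape of the computation, not a real model.
rng = np.random.default_rng(0)
VOCAB, HIDDEN = 10, 8
E = rng.normal(size=(VOCAB, HIDDEN))       # embedding table (assumed sizes)
W_enc = rng.normal(size=(HIDDEN, HIDDEN))  # encoder recurrence
W_dec = rng.normal(size=(HIDDEN, HIDDEN))  # decoder recurrence
W_out = rng.normal(size=(HIDDEN, VOCAB))   # projection to vocabulary logits

def encode(tokens):
    """Simple RNN encoder: fold the input into one fixed-size vector."""
    h = np.zeros(HIDDEN)
    for t in tokens:
        h = np.tanh(h @ W_enc + E[t])
    return h

def decode(h, max_len=5):
    """Greedy decoder (no attention): emit one token per step."""
    out = []
    for _ in range(max_len):
        h = np.tanh(h @ W_dec)
        out.append(int(np.argmax(h @ W_out)))
    return out

print(decode(encode([1, 2, 3])))
```

The arXiv tutorial linked above builds this same picture up in much more detail, including attention, which replaces the single fixed-size encoder state with a weighted view over all encoder steps.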

