
Exploiting Similarities Among Languages for Machine Translation (2013) - ColinWright
http://arxiv.org/abs/1309.4168
======
deepnet
This work indicates that language translation can be approximated by a linear
mapping between the lower-dimensional semantic vector spaces that neural
networks converge on, derived from the high-dimensional whole-vocabulary
context vectors of the input.
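
Concretely, the paper learns a translation matrix W from a small seed
dictionary of word pairs by minimizing sum_i ||W x_i - z_i||^2, then
translates a new word by mapping its vector through W and taking the nearest
neighbour in the target space. Below is a minimal Python sketch with toy
random vectors standing in for real word2vec embeddings, and a closed-form
least-squares fit in place of the paper's gradient descent; the `translate`
helper and all names are illustrative, not the authors' code.

```python
import numpy as np

# Toy stand-ins for monolingual word embeddings; in the paper these come
# from word2vec models trained separately on each language's corpus, and
# the two spaces may have different dimensionality.
rng = np.random.default_rng(0)
d_src, d_tgt, n_pairs = 300, 200, 5000

X = rng.normal(size=(n_pairs, d_src))        # source-language word vectors
W_true = rng.normal(size=(d_tgt, d_src))     # pretend the true map is linear
Z = X @ W_true.T + 0.01 * rng.normal(size=(n_pairs, d_tgt))  # target vectors

# Fit the translation matrix W minimizing sum_i ||W x_i - z_i||^2.
# The paper optimizes this with SGD; least squares gives the same minimizer.
W_ls, *_ = np.linalg.lstsq(X, Z, rcond=None)
W = W_ls.T                                   # shape (d_tgt, d_src)

def translate(x, target_vocab):
    """Map a source vector into the target space and return the index of
    its nearest neighbour (by cosine similarity) in the target vocabulary."""
    z = W @ x
    sims = target_vocab @ z / (
        np.linalg.norm(target_vocab, axis=1) * np.linalg.norm(z))
    return int(np.argmax(sims))
```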

It is extended by:

Sequence to Sequence Learning with Neural Networks by Sutskever, Vinyals & Le

[http://arxiv.org/abs/1409.3215](http://arxiv.org/abs/1409.3215)

which adds the LSTM (Long Short-Term Memory), a recurrent neural network with
trainable long-term memory - which allows whole sentences to be digested and
translated.

" Our main result is that on an English to French translation task from the
WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of
34.8 on the entire test set, ... Additionally, the LSTM did not have
difficulty on long sentences."[1]
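
A minimal PyTorch sketch of that encoder-decoder shape, assuming a single
LSTM layer and illustrative vocabulary sizes and dimensions; the paper itself
uses a 4-layer LSTM and feeds the source sentence reversed, which it found
important for long sentences.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Encoder-decoder in the style of Sutskever et al. (2014): one LSTM
    digests the source sentence into its final (hidden, cell) state, and a
    second LSTM generates the target sentence conditioned on that state."""

    def __init__(self, src_vocab, tgt_vocab, emb_dim=256, hid_dim=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)

    def forward(self, src_tokens, tgt_tokens):
        # The whole source sentence is compressed into one fixed-size state.
        _, state = self.encoder(self.src_emb(src_tokens))
        # Teacher-forced decoding from the encoder's final state.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_tokens), state)
        return self.out(dec_out)             # logits over target vocabulary

model = Seq2Seq(src_vocab=10000, tgt_vocab=10000)
src = torch.randint(0, 10000, (2, 7))        # a batch of 2 source sentences
tgt = torch.randint(0, 10000, (2, 9))        # shifted target sentences
logits = model(src, tgt)                     # shape (2, 9, 10000)
```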

Another interesting result is the _translation_ of pixels into words that
describe the image.

Deep Fragment Embeddings for Bidirectional Image Sentence Mapping by Karpathy,
Joulin, & Li

[http://papers.nips.cc/paper/5281-deep-fragment-embeddings-for-bidirectional-image-sentence-mapping](http://papers.nips.cc/paper/5281-deep-fragment-embeddings-for-bidirectional-image-sentence-mapping)

code & a demo is available :

[http://cs.stanford.edu/people/karpathy/deepimagesent/](http://cs.stanford.edu/people/karpathy/deepimagesent/)
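
The core training signal there is a max-margin ranking objective over a
shared embedding space: matched image-sentence pairs should score above
mismatched ones. A hedged sketch, assuming precomputed image and sentence
features and illustrative names and dimensions; the paper actually embeds
*fragments* (object detections and dependency-tree relations) rather than
whole items, but the whole-item version shows the same idea.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedding(nn.Module):
    """Project precomputed image features and sentence features into one
    shared space where matched pairs should have high inner product."""

    def __init__(self, img_dim=4096, txt_dim=300, joint_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, joint_dim)
        self.txt_proj = nn.Linear(txt_dim, joint_dim)

    def forward(self, img_feats, txt_feats):
        return (F.normalize(self.img_proj(img_feats), dim=1),
                F.normalize(self.txt_proj(txt_feats), dim=1))

def ranking_loss(img_emb, txt_emb, margin=0.2):
    """Bidirectional max-margin ranking: each matched (image, sentence)
    pair on the diagonal must outscore mismatched pairs by `margin`."""
    scores = img_emb @ txt_emb.T                     # (batch, batch)
    pos = scores.diag()
    cost_i = (margin + scores - pos.unsqueeze(1)).clamp(min=0)  # img -> txt
    cost_t = (margin + scores - pos.unsqueeze(0)).clamp(min=0)  # txt -> img
    cost_i.fill_diagonal_(0)                         # ignore matched pairs
    cost_t.fill_diagonal_(0)
    return (cost_i + cost_t).mean()
```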

