
Show HN: A general-purpose encoder-decoder framework for Tensorflow - dennybritz
https://github.com/google/seq2seq
======
aduffy
There is already a seq2seq in the tree under contrib, is this one different
from/replacing it?

[https://github.com/tensorflow/tensorflow/tree/master/tensorf...](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/seq2seq)

~~~
dennybritz
This is a good question and we should probably add this to the FAQ.
tf.contrib.seq2seq is a low-level library that you can use to _build_ seq2seq
models; it is used internally by this project. The key difference here is that
google/seq2seq is an end-to-end pipeline that you can run with your own
data and that comes with a lot of bells and whistles.

------
rotten
To someone not active in the Tensorflow community, it is really not obvious
what this is for. What are typical use cases? Why does the world need this?

~~~
aduffy
Encoder-decoder models are a very common architecture for sequence-to-
sequence tasks in deep learning. They've had some big wins lately in NLP
tasks such as translation, POS tagging, dialogue generation, etc.
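To make the idea concrete, here's a toy numpy sketch of the encode-then-decode
pattern (hypothetical weights and sizes, plain RNN cells, greedy decoding; not
the google/seq2seq or tf.contrib.seq2seq API): the encoder folds a variable-
length input into a fixed-size state vector, and the decoder unrolls that
state into an output sequence one token at a time.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden, vocab = 8, 5  # toy sizes, chosen for illustration

# Hypothetical parameters for a plain (non-gated) RNN cell.
W_in = rng.standard_normal((vocab, hidden)) * 0.1
W_rec = rng.standard_normal((hidden, hidden)) * 0.1
W_out = rng.standard_normal((hidden, vocab)) * 0.1

def one_hot(token):
    v = np.zeros(vocab)
    v[token] = 1.0
    return v

def encode(tokens):
    """Fold the whole input sequence into one fixed-size state vector."""
    state = np.zeros(hidden)
    for t in tokens:
        state = np.tanh(one_hot(t) @ W_in + state @ W_rec)
    return state

def decode(state, steps):
    """Unroll the encoder state into an output sequence, one step at a time."""
    out, token = [], 0  # token 0 acts as a start-of-sequence symbol here
    for _ in range(steps):
        state = np.tanh(one_hot(token) @ W_in + state @ W_rec)
        token = int(np.argmax(state @ W_out))  # greedy decoding
        out.append(token)
    return out

source = [1, 3, 2, 4]
print(decode(encode(source), steps=4))  # a 4-token output sequence
```

Real systems replace the plain cell with LSTM/GRU cells, add attention, and
train the weights end to end, but the encode/decode split is the same.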

The Tensorflow documentation has an okay writeup about seq2seq models:
[https://www.tensorflow.org/tutorials/seq2seq](https://www.tensorflow.org/tutorials/seq2seq)

The author of the library also has a small blurb about it on his blog:
[http://www.wildml.com/deep-learning-glossary/#seq2seq](http://www.wildml.com/deep-learning-glossary/#seq2seq)

------
braindead_in
Are there any examples of seq2seq networks being used for tasks other than
NLP? For example, can it be used for something like noise removal?

~~~
dennybritz
Yes, these models can be applied to a lot of non-NLP tasks. For example, I've
seen seq2seq models applied to medical record prediction, program generation,
etc. Noise removal seems like a good candidate.

