
Is deepmind moving back to (py)torch? - UCAN2
Hi,
I'm working on natural language entailment tasks with Keras, but despite some searching around I couldn't find any convincing example of a seq2seq-with-attention model in Keras (or of a neural cache model, for that matter).
On the other hand, I just looked at PyTorch for a couple of hours, and dynamic/control-flow operations seem really easy to code in PyTorch.
Is everyone progressively moving to PyTorch while only using TensorFlow as a backend for Keras? Is the PyTorch movement even spreading inside Google?
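For example, by "dynamic/control flow" I mean that in PyTorch's define-by-run model, data-dependent loops and branches are just ordinary Python in the forward pass. A toy sketch (the module name and the loop condition are made up purely for illustration):

```python
import torch
import torch.nn as nn

class DynamicDepthNet(nn.Module):
    """Toy model whose forward pass uses plain Python control flow.

    Because PyTorch rebuilds the graph on every call, the number of
    times the layer is applied can depend on the data itself.
    """
    def __init__(self, dim=8):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x):
        steps = 0
        # Keep applying the layer while the activation norm stays small,
        # with a hard cap -- an ordinary Python while-loop and branch.
        while x.norm() < 10 and steps < 5:
            x = torch.relu(self.linear(x))
            steps += 1
        return x, steps

model = DynamicDepthNet()
out, steps = model(torch.randn(1, 8))
```

In a static-graph framework you would need special graph ops (e.g. a symbolic while-loop) to express the same thing.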
======
jamesmishra
I'm not a Googler/Deepminder, so I can't comment about what goes on inside the
company.

However, Google is making a hardware investment in Tensor Processing Units.
These presumably offer hardware acceleration for the static computation graphs
that TensorFlow produces, and PyTorch's dynamic, define-by-run graphs wouldn't
map well onto them.

You're right that--as of this writing--there are no good seq2seq-with-attention
models in Keras. I think there are a few attempts on GitHub, but I haven't
tried them yet, and I don't know anybody else who has tried seq2seq with
attention in Keras either.

Additionally, TensorFlow has a seq2seq module, and it does come with an
attention mechanism. See
[https://github.com/google/seq2seq/blob/master/seq2seq/models...](https://github.com/google/seq2seq/blob/master/seq2seq/models/attention_seq2seq.py)
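For intuition, the core of any such attention mechanism is a weighted average of encoder states, where the weights come from scoring each encoder state against the current decoder state. A framework-agnostic sketch of dot-product attention in PyTorch (the function name and shapes are my own, not the tf-seq2seq API):

```python
import torch

def dot_product_attention(decoder_state, encoder_states):
    """Dot-product attention over a batch of encoder states.

    decoder_state:  (batch, dim)          -- current decoder hidden state
    encoder_states: (batch, src_len, dim) -- one vector per source position
    """
    # Score each source position by its dot product with the decoder state.
    scores = torch.bmm(encoder_states,
                       decoder_state.unsqueeze(2)).squeeze(2)  # (batch, src_len)
    # Normalize scores into attention weights.
    weights = torch.softmax(scores, dim=1)                     # (batch, src_len)
    # Context vector = attention-weighted sum of encoder states.
    context = torch.bmm(weights.unsqueeze(1),
                        encoder_states).squeeze(1)             # (batch, dim)
    return context, weights

context, weights = dot_product_attention(torch.randn(2, 4),
                                         torch.randn(2, 5, 4))
```

The context vector is then fed into the decoder at each step; the various attention variants mostly differ in how the scores are computed.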

Anyway, I think the best thing for folks like you and me is to just keep using
PyTorch for research work, and use TensorFlow for certain deployment scenarios
where TensorFlow is superior--like mobile apps and Google Cloud.

