
Tensorflow 1.5.0 - connorgreenwell
https://github.com/tensorflow/tensorflow/releases/tag/v1.5.0
======
minimaxir
The big feature is CUDA 9 and cuDNN 7 support, which promises double-speed
training on Volta GPUs/FP16. (it should be noted that TF 1.5 does _not_
support CUDA 9.1 yet, which I found out the hard way)

I updated my Keras container with the TF 1.5 RC, CUDA 9, and cuDNN 7
([https://github.com/minimaxir/keras-cntk-docker](https://github.com/minimaxir/keras-cntk-docker)), but did not notice a
significant speed increase on a K80 GPU (I'm unsure if Keras makes use of FP16
yet either).

~~~
puzzle
You need Pascal hardware or later for FP16. Amazon's G3 Maxwell instances,
which are newer than K80s, don't support it, either.

~~~
adrianmacneil
P3 instances have Volta (V100) GPUs.

------
NelsonMinar
Eager execution is appealing for folks new to learning TensorFlow. The
deferred execution style is powerful, but if you just want to tinker in a REPL
it's nice to have imperative programming.
[https://github.com/tensorflow/tensorflow/tree/r1.5/tensorflo...](https://github.com/tensorflow/tensorflow/tree/r1.5/tensorflow/contrib/eager)

~~~
pacala
What if I have data-dependent computational graphs? For example, recursive
NNs.

~~~
akshayka
I'm a member of the team that works on eager execution.

When eager execution is enabled, you no longer need to worry about graphs:
operations are executed immediately. The upshot is that eager execution lets
you implement dynamic models, like recursive NNs, using Python control flow.
We've published some example implementations of such models on Github:

[https://github.com/tensorflow/tensorflow/tree/master/tensorf...](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples)

I'd be happy to answer other questions about eager execution, and feedback is
welcome.
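The "dynamic models via Python control flow" point can be sketched without TensorFlow at all; a minimal, hypothetical illustration in plain NumPy (not the eager API itself) of how ordinary recursion can drive a data-dependent model such as a recursive NN over a tree:

```python
import numpy as np

# Hypothetical sketch: with eager-style execution, ordinary Python
# recursion drives a data-dependent computation, as in a TreeRNN.
# A tree is either a leaf embedding (np.ndarray) or a (left, right) pair.

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))  # combines two 4-d child vectors into one

def encode(tree):
    """Recursively encode a binary tree into a single 4-d vector."""
    if isinstance(tree, np.ndarray):   # leaf: already an embedding
        return tree
    left, right = tree                 # internal node: recurse on children
    h = W @ np.concatenate([encode(left), encode(right)])
    return np.tanh(h)                  # nonlinearity, as in a recursive NN

leaf = lambda: rng.standard_normal(4)
vec = encode((leaf(), (leaf(), leaf())))  # tree shape is data-dependent
print(vec.shape)  # (4,)
```

The shape of the computation here follows the shape of the input tree, which is exactly what is awkward to express with a fixed, ahead-of-time graph.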

EDIT: Just because you don't have to worry about graphs doesn't mean that
graph construction and eager execution aren't related; take a look at our
research blog post for more information if you're curious about the ways in
which they relate to each other
([https://research.googleblog.com/2017/10/eager-execution-impe...](https://research.googleblog.com/2017/10/eager-execution-imperative-define-by.html)).

~~~
ajayka
Looking forward to trying out eager!

------
yolobey
I've been dreading version updates ever since they dropped Mac binary support.
There are always obscure things to patch that I have to find out by myself;
the build easily wastes a whole day.

I think I'm either going to change my workflow and use another OS or switch
fully to PyTorch.

~~~
matt4077
I have recently compiled the -rc1 on MacOS 10.12, and the process has
basically remained unchanged from 1.4. Here's a gist with my notes:
[https://gist.github.com/MatthiasWinkelmann/41dd42b07f8bd794a...](https://gist.github.com/MatthiasWinkelmann/41dd42b07f8bd794aa462f3b9712c018)

(I still used CUDA 8, but 9 should also work. You just need to find the
version of the command line tools it works with)

------
wmf
And zero mention of AMD support.

~~~
htsh
Until there's something like CUDA for AMD, isn't this going to be difficult? I
don't know the area well; just curious.

(How good is openCL when it comes to this sort of stuff? Could they support it
without crazy effort?)

~~~
wmf
AMD is porting TensorFlow, but these release notes give the impression that
upstream is not helping in any way.
[https://github.com/ROCmSoftwarePlatform/hiptensorflow/blob/h...](https://github.com/ROCmSoftwarePlatform/hiptensorflow/blob/hip/README.ROCm.md)

------
arunmandal53
For anyone who is having trouble with the installation, here's a tutorial for
installing the official pre-built TensorFlow 1.5.0 pip package (both CPU and
GPU versions) on Windows and Ubuntu. There's also a tutorial for building
TensorFlow from source with CUDA 9.1: [http://www.python36.com](http://www.python36.com)

------
zitterbewegung
So is it an easy process to convert a tensorflow program from 1.4 to 1.5? When
I tried converting something from 0.9 to 1.0 I couldn't figure it out.

~~~
connorgreenwell
For most cases it should just be a drop-in replacement. IIRC they promise not
to break the API between point releases (except tf.contrib.*, which may change
or disappear entirely...)

~~~
woodson
That promise is only for the Python API, not C++/Java.

------
blueyes
> Starting from 1.6 release, our prebuilt binaries will use AVX instructions.
> This may break TF on older CPUs.

~~~
htsh
This is primarily pre-2011 CPUs, though? Looks like everything since Sandy
Bridge and Bulldozer will be okay.

I guess we never know what's running on our cloud instances.
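On Linux you can at least check whether a given machine's CPU advertises AVX by reading `/proc/cpuinfo`; a small sketch (the helper name is mine, and it returns None on non-Linux systems where that file doesn't exist):

```python
def has_avx():
    """Return True/False if the CPU flags list AVX, or None when
    /proc/cpuinfo is unavailable (e.g. on macOS or Windows)."""
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return "avx" in line.split()
    except OSError:
        return None
    return None

print(has_avx())
```

Running this on a cloud instance before upgrading to a post-1.5 prebuilt binary would tell you whether the AVX requirement is a problem there.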

~~~
jabl
IIRC, since the beginning TensorFlow has required at least sm 3.0 support
(Kepler or newer) on the GPU side. I imagine the combination of a pre-AVX CPU
and a Kepler-or-newer GPU is uncommon.

