The big feature is CUDA 9 and cuDNN 7 support, which promises double-speed FP16 training on Volta GPUs. (It should be noted that TF 1.5 does not support CUDA 9.1 yet, which I found out the hard way.)
I updated my Keras container with the TF 1.5 RC, CUDA 9, and cuDNN 7 (https://github.com/minimaxir/keras-cntk-docker), but did not notice a significant speed increase on a K80 GPU (I'm unsure if Keras makes use of FP16 yet either).
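If you want to poke at FP16 from the Keras side, the only knob I know of is the backend floatx setting; a rough sketch (untested in this container, and I haven't confirmed it actually engages the fast FP16 paths):

    from keras import backend as K

    # Make new layers/weights default to half precision.
    K.set_floatx('float16')
    # Keras's default epsilon (1e-7) underflows in float16.
    K.set_epsilon(1e-4)

    print(K.floatx())  # 'float16'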
When I say K80, I mean running a GPU in the cloud, not running it locally (and as an aside, due to the crypto boom, buying a physical GPU for cheap is more difficult)
A nice compromise is the GTX 1080 that you can rent at Hetzner for €100/month [0]. I am not sure how it compares to other GPUs, but for experimenting it works quite well.
It's actually €118/month (the prices on the website exclude VAT).
I'm using it, and I can only recommend it: zero problems, pretty decent box for a low price. Great solution; now just waiting for the crypto bubble to burst and GPU prices to fall!
Oh yeah. What I meant was that since K80s are old, they don't have the fancy features like fast FP16, which came later. The K80 dates from slightly before the deep learning explosion, afaik.
Eager execution is appealing for folks new to TensorFlow. The deferred execution style is powerful, but if you just want to tinker in a REPL it's nice to have imperative programming. https://github.com/tensorflow/tensorflow/tree/r1.5/tensorflo...
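Roughly what the difference looks like in a REPL (eager is still a preview under tf.contrib.eager in 1.5, so the import path may move in later releases):

    import tensorflow as tf
    import tensorflow.contrib.eager as tfe

    tfe.enable_eager_execution()  # call once, at program start

    x = tf.constant([[2.0, 3.0]])
    y = tf.matmul(x, tf.transpose(x))
    print(y)  # prints the actual value ([[13.]]) right away -- no Session.run()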
I'm a member of the team that works on eager execution.
When eager execution is enabled, you no longer need to worry about graphs: operations are executed immediately. The upshot is that eager execution lets you implement dynamic models, like recursive NNs, using Python control flow. We've published some example implementations of such models on GitHub.
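As a minimal sketch of what that means in practice (just an illustration, not one of the published examples): a loop whose length depends on the data is ordinary Python once eager execution is enabled.

    import tensorflow as tf
    import tensorflow.contrib.eager as tfe

    tfe.enable_eager_execution()

    def sum_until_negative(values):
        # Plain Python loop and `if` over tensor values -- no tf.while_loop/tf.cond.
        total = tf.constant(0.0)
        for i in range(int(values.shape[0])):
            if values[i] < 0:
                break
            total += values[i]
        return total

    print(sum_until_negative(tf.constant([1.0, 2.0, -5.0, 4.0])))  # 3.0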
I'd be happy to answer other questions about eager execution, and feedback is welcome.
EDIT: Just because you don't have to worry about graphs doesn't mean that graph construction and eager execution aren't related; take a look at our research blog post for more information if you're curious about the ways in which they relate to each other (https://research.googleblog.com/2017/10/eager-execution-impe...).
I've been dreading version updates ever since they dropped Mac binary support. There are always obscure things to patch that I have to figure out by myself, and the build easily wastes a whole day.
I think I'm either going to change my workflow and use another OS or switch fully to PyTorch.
For anyone who is having trouble with the installation, here's a tutorial to install the official pre-built TensorFlow 1.5.0 pip package (both CPU and GPU versions) on Windows and Ubuntu; there's also a tutorial to build TensorFlow from source for CUDA 9.1: http://www.python36.com
For most cases it should just be a drop-in replacement. IIRC they promise not to break the API between point releases (except tf.contrib.*, which may change or disappear entirely...).
0.9 wasn't production-ready, and they didn't guarantee backward compatibility until 1.0. So there's nothing to change from 1.4 to 1.5; maybe you'll get some warnings about features that will change in the future, but it will work.
IIRC TensorFlow has required at least sm_30 (Kepler or newer) since the beginning. I imagine the combination of a pre-AVX CPU and a Kepler-or-newer GPU is uncommon.