Tensorflow v1.2 released (github.com)
174 points by yuanchuan on June 17, 2017 | 42 comments



"RNNCell objects now subclass tf.layers.Layer. The strictness described in the TensorFlow 1.1 release is gone: The first time an RNNCell is used, it caches its scope. All future uses of the RNNCell will reuse variables from that same scope. "

I'm so glad they fixed this; I've been running 1.0 for the last few months because the 1.1 release broke their own RNN tutorials and a lot of the seq2seq code that's out there. I really, really love TensorFlow and understand it's a fast-moving project, but I hope they do more regression testing on their example code in the future. This is an exciting release, though!
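
For illustration, here's roughly the pattern this fixes (a minimal sketch against the 1.x API; the scope name is just a placeholder):

    import tensorflow as tf

    # A single cell object, used in two places. Under TF 1.1's strict
    # checking this kind of reuse could raise; per the 1.2 notes, the cell
    # caches its scope on first use and reuses those variables afterwards.
    cell = tf.contrib.rnn.BasicLSTMCell(num_units=64)
    inputs = tf.placeholder(tf.float32, [None, 10, 32])  # batch x time x features

    outputs_a, _ = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

    with tf.variable_scope("elsewhere"):
        # Second use of the *same* cell: weights come from the cached scope.
        outputs_b, _ = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)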


I wouldn't upgrade just yet if I were you, because part of the seq2seq code is still broken - more specifically, "there seems to be a problem with deepcopy of RNNCell"[1]. The bug is still open[2], and has been for a while.

[1] https://stackoverflow.com/a/44594376

[2] https://github.com/tensorflow/tensorflow/issues/8191
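
For context, the failure mode looks roughly like this (a hedged sketch; the seq2seq decoders reportedly deepcopy cells internally):

    import copy
    import tensorflow as tf

    cell = tf.contrib.rnn.BasicLSTMCell(32)
    try:
        # Reported to fail on affected builds when seq2seq copies a cell.
        cell_copy = copy.deepcopy(cell)
    except Exception as e:
        print("deepcopy of RNNCell failed:", e)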


What is exciting about it, do you think?


Other than unbreaking the feature I use most often? Haha, I think the new versions of TensorBoard and the SavedModel CLI are great for getting a better sense of what is going on under the hood. But I'm just generally excited by the framework hitting new releases, clearing bugs, and becoming more mature.
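
For anyone curious, here's a minimal sketch of exporting something the new SavedModel CLI can inspect (the path and names are made up):

    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 3], name="x")
    w = tf.Variable(tf.ones([3, 1]), name="w")
    y = tf.matmul(x, w, name="y")

    # The export directory must not already exist.
    builder = tf.saved_model.builder.SavedModelBuilder("/tmp/demo_model")
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        builder.add_meta_graph_and_variables(sess, [tf.saved_model.tag_constants.SERVING])
    builder.save()

    # Then, from a shell: saved_model_cli show --dir /tmp/demo_model --all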


Did they explain why they decided to stop supporting GPUs on Mac OS X? That's going to make a lot of developers think twice before upgrading.


> TensorFlow 1.1.0 will be the last time we release a binary with Mac GPU support. Going forward, we will stop testing on Mac GPU systems. We continue to welcome patches that maintain Mac GPU support, and we will try to keep the Mac GPU build working.

Sounds like a lack of external contributors maintaining it to me. Are there really that many users? Everyone I know on macOS uses Docker (or some other virtualisation) to run Linux for small jobs and then connects remotely to Linux boxes when they need more computing power.


Officially, there shouldn't be very many people for whom it's relevant. The last Macs with Nvidia GPUs were sold around 2011 if I remember correctly.

Unofficially, there may be some people using Hackintoshes with rather beefy GPUs for machine learning.

There's a lot you can do easily on a $500 GPU that would take too long on a CPU. And I prefer the shorter write/run/debug loop of working locally. It's the same niche other machine learning workstations fill, only with my preferred desktop OS.

There will also be external GPUs for Macs soon(ish), and those would be perfect for TensorFlow. I'm not sure they'll want it running on Macs again at that point, so discontinuing support now may be the wrong decision.


My late-2013 MBP has an Nvidia 750M.


Doing a little binary search on Everymac, it seems that the last new MBP model with Nvidia graphics was mid-2014. Both offerings with discrete graphics from mid-2015 had AMD cards. I'm also fairly sure you could still buy those mid-2014 MBPs well into 2015; I distinctly remember seeing both AMD and Nvidia MBPs available in the online store around that time.

http://www.everymac.com/systems/apple/macbook_pro/specs/macb...


It just means OSX users (like me) have to compile from source - no different from PyTorch. Happy to provide compiled binaries for 10.12, though it's a bit of a chore to get Xcode clang and CUDA to play nice together.


For now - I've thought about getting involved in keeping that going, even trying to set up a Travis build. But Bazel, I, and C++, we simply don't enjoy the time we spend together.


FWIW, 1.2 does still build on macOS, but you have to build it yourself. I ran into a fair number of problems, though I'm not sure they weren't of my own making.


I think it's because Macs have shipped with AMD GPUs for some years now.

I have a MacBook Pro from 2013, and I think it was the second-to-last model to have an Nvidia GPU.

Also, it's always a headache for me to get CUDA working with Python/R libraries any time I update the system, and I have yet to find a straightforward guide that tells me exactly which steps to take... they all fail at some point.


Note: As of version 1.2, TensorFlow no longer provides GPU support on Mac OS X.


To be a bit more precise, the changelog says:

> TensorFlow 1.1.0 will be the last time we release a binary with Mac GPU support. Going forward, we will stop testing on Mac GPU systems. We continue to welcome patches that maintain Mac GPU support, and we will try to keep the Mac GPU build working.

In other words, it still works (at least for now), and they'll accept patches to keep it working - they're just not going to explicitly test for it anymore.


Is there any explanation for why they decided to do this? I would imagine they just don't have the means to test on Macs anymore, but I'd like to know for sure.


Macs don't have Nvidia GPUs anymore, and TensorFlow's GPU support only targets Nvidia (CUDA).


I have an MBP and an iMac, both of which came with Nvidia GPUs. The MBP is older, but I doubt either of these is an atypical machine out there today.


They probably are atypical systems for people using TensorFlow professionally though.


I am not sure what the current progress on OpenCL support is, but Tensorflow's configure script definitely asks whether you want to install Tensorflow with OpenCL support.


The last Mac to ship with an NVidia GPU was the 2014 MBP. That was three years ago, and is starting to exit most companies' hardware refresh cycle. You can buy an external thunderbolt box and try putting an nvidia GPU in it, but the driver support isn't always functional. For an example of some of the hoops you have to go through, see: https://9to5mac.com/2017/04/11/hands-on-powering-the-macbook...

And I'm sure you can imagine the reasons a large company wouldn't set up a Hackintosh. :) The writing is on the wall for being able to maintain a reliable test cluster, at least for now, and broken tests are very, very bad for being able to develop rapidly.


Apple puts GPUs in these things? Sorry, I thought it only put Intel Irises. /s



Thanks! We've updated the link from the homepage.


Can someone explain why I should pick this over scikit? I don't have any ML experience. I find ML quite magical :/ and totally difficult to get started with if you don't have a PhD in mathematics.


ML/DL requires only sophomore math: calculus, linear algebra, and probability cover the majority of what you need.


Scikit is for non-deep-learning machine learning algorithms only (it does support neural networks in the latest version, but only uses the CPU for training).

TL;DR: Use TensorFlow for deep learning, use scikit for other ML algorithms.
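
To make the split concrete, here's what the scikit side looks like (a sketch using the built-in iris dataset):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Classical, CPU-only ML: fit an estimator, score it.
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))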


Scikit doesn't support GPUs, which makes it infeasible to run the sort of deep learning stuff that's currently making waves. The competitors to TensorFlow are Torch, Caffe, and maybe Microsoft's CN(something, but not "Y")K.

To get started, Keras is an excellent library that's built on top of TensorFlow and has recently become an official part of it.
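
A taste of what that looks like (a minimal sketch; the layer sizes and shapes are arbitrary):

    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential([
        Dense(64, activation="relu", input_shape=(20,)),  # 20 input features
        Dense(1, activation="sigmoid"),                   # binary output
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(X_train, y_train, epochs=5)  # with your own data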


Microsoft CNTK. Originally the Computational Network Toolkit; the abbreviation remains, but it's now the Microsoft Cognitive Toolkit.


TensorFlow is low-level. scikit-learn (sklearn) is high-level, off-the-shelf ML: apply this algorithm to this dataset with these parameters.

TFlearn is a high-level, off-the-shelf library built on TensorFlow, giving you some of its benefits, e.g. GPU acceleration.

It's hard to get state-of-the-art results using off-the-shelf algorithms; unless your problem is very vanilla, you typically need to get under the hood and do custom hyperparameter tuning. That's why ML competitions like Kaggle are interesting: there are so many ways to skin the cat that you can't capture them all in off-the-shelf libraries.

But you can get very useful results with a lot less effort using sklearn and TFlearn off-the-shelf.
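
For a feel of what that off-the-shelf style looks like, here's a hedged TFlearn sketch (assuming the standalone tflearn package; the shapes are arbitrary):

    import tflearn

    # Declare the network layer by layer; tflearn handles the TF graph.
    net = tflearn.input_data(shape=[None, 20])
    net = tflearn.fully_connected(net, 64, activation="relu")
    net = tflearn.fully_connected(net, 2, activation="softmax")
    net = tflearn.regression(net, optimizer="adam", loss="categorical_crossentropy")

    model = tflearn.DNN(net)
    # model.fit(X, Y, n_epoch=10)  # with your own data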


Didn't TFlearn become tensorflow.contrib.learn? So there's technically a high-level API in the TF standard library. Though Keras would probably be a better choice if OP wanted a high-level API for deep learning.


A version of Keras is included in TensorFlow since 1.1 as well, under `tensorflow.contrib.keras.python.keras`. (This has been particularly useful for me as a Windows user, since my GTX 1080 is stuck in a Windows machine, and it's a little more painful to deal with upstream Keras dependencies on Windows, but TF alone works great.)
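
In case it helps anyone, the imports look like this (using the path given above; hedged, since contrib paths tend to move between releases):

    # Bundled Keras, no separate keras install needed.
    from tensorflow.contrib.keras.python.keras.models import Sequential
    from tensorflow.contrib.keras.python.keras.layers import Dense

    model = Sequential([Dense(1, input_shape=(4,))])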


TensorFlow implements the (really abstract, at least for me) concept of a computational graph. Think of it as nodes being operations and edges being the values (tensors) that flow between them.

Among other things, this allows you to partition your computational graph into different subgraphs and run each subgraph in parallel.

Sklearn doesn't let you parallelize things that way; however, I do agree that TF doesn't have a favorable learning curve, so you might want to start with sklearn (or TFLearn) to get to know the basics of ML first.
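
A toy example of the deferred-execution idea (a minimal sketch against the TF 1.x graph API):

    import tensorflow as tf

    a = tf.constant(2.0)
    b = tf.constant(3.0)
    c = a * b  # adds a multiply op to the graph; nothing is computed yet

    with tf.Session() as sess:
        print(sess.run(c))  # executes just the subgraph needed for c -> 6.0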


I wonder when they'll start supporting OpenCL :( I want to use my Radeon GPU.


While direct support from the creators of TF would be the best thing, be sure to check out all the add-on options, like https://github.com/hughperkins/tf-coriander


Agreed. Hardware lock-in is nasty business, and with tools like HIP it shouldn't be too difficult.


Site is desktop only:

"Oops. Since this experiment loads over 14,000 bird sounds, you'll need to view it on a desktop computer."


I think you wanted to comment on https://news.ycombinator.com/item?id=14577014


What about Java?


Thank you for the release! There is a submitted issue because Intel MKL support does not work on Mac OS X, only Linux.

There should be some way of doing this manually. Any ideas?


I've managed to compile tensorflow with MKL on Mac OS X. The ingredients were roughly:

1. Download MKL from Intel's website, install to /usr/local/lib/

2. Change tensorflow's configure script to look for the downloaded library on OS X instead of just aborting.

3. Possibly change some other bazel build files to look for .dylib instead of .so files.

4. Build with extra flags to look for the appropriate libraries.

I'm not sure if all these steps are necessary but they were sufficient.

The reason I had to install to /usr/local/lib/ instead of Intel's suggestion of /opt/intel/something was that, with the latter, even though I passed the appropriate directory to the linker, I think there was still some intermediate binary that wasn't seeing that path. Putting the dylibs in the default directory solved that.

I can't contribute my patch because I did this on my employer's computer and it'd be an enormous hassle to work out the licensing stuff.


Are there any comparisons of MKL to the GPU versions? This appears to be an attempt by Intel to stay relevant in the field, and I'm sceptical when hardware vendors create implementations that they couldn't get projects to do themselves.

If you're working on a Mac without an Nvidia GPU, the best bet may be OpenCL. I've seen a lot of commits for that, and when it's ready I'd be surprised if even MacBook GPUs didn't run laps around CPUs.



