I'm so glad they fixed this, I've been running 1.0 for the last few months because the 1.1 release broke their own RNN tutorials and a lot of seq2seq code that is out there. I really, really love Tensorflow and understand it is a fast moving project but I hope they do more regression testing on their example code in the future. This is an exciting release though!
Sounds like a lack of external contributors maintaining it to me, are there really that many users? Everyone I know on macOS uses docker (or some other virtualisation) to run linux for small jobs and then connects remotely to linux boxes when they need more computing power.
Unofficially, there may be some people using Hackintoshes with rather beefy GPUs for machine learning.
There's a lot you can do easily on a $500 GPU that would take far too long on a CPU. And I prefer the shorter write/run/debug loop of working locally. It's the same niche other machine learning workstations fill, only with the preferred desktop OS.
There will also be external GPUs for Macs soon(ish), and those would be perfect for TensorFlow. I'm not sure they'll want to bring Mac support back at that point, so discontinuing it now may be the wrong decision.
I have a MacBook Pro from 2013, and I think it was the second-to-last model to have an Nvidia GPU.
Also, it's always a headache for me to get CUDA working with Python/R libraries whenever I update the system, and I have yet to find a straightforward guide that tells me exactly which steps to take... they all fail at some point.
> TensorFlow 1.1.0 will be the last time we release a binary with Mac GPU support. Going forward, we will stop testing on Mac GPU systems. We continue to welcome patches that maintain Mac GPU support, and we will try to keep the Mac GPU build working.
In other words, it still works (at least for now), and they'll accept patches to keep it working - they're just not going to explicitly test for it anymore.
And I'm sure you can imagine the reasons a large company wouldn't set up a Hackintosh. :) The writing is on the wall for maintaining a reliable test cluster, at least for now, and broken tests are very bad for being able to develop rapidly.
TLDR: Use Tensorflow for deeplearning, use Scikit for other ML algorithms.
To get started, Keras is an excellent library that's built on top of TensorFlow and has recently become an official part of it.
TFlearn is a high-level off-the-shelf library built on TensorFlow, giving you some of the same benefits, e.g. GPU acceleration.
It's hard to get state-of-the-art results using off-the-shelf algorithms; unless your problem is very vanilla, you typically need to get under the hood and do custom hyperparameter tuning. That's why ML competitions like Kaggle are interesting: there are so many ways to skin the cat that you can't capture them all in off-the-shelf libraries.
But you can get very useful results with a lot less effort using sklearn and TFlearn off-the-shelf.
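As a concrete sketch of that off-the-shelf workflow (assuming scikit-learn is installed; the dataset and model here are just illustrative choices, not anything from the thread):

```python
# A few lines of sklearn get you a trained, evaluated classifier --
# no custom tuning, just library defaults.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # held-out accuracy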
Among other things, what this allows you to do is partition your computational graph into different subgraphs and run each subgraph in parallel.
sklearn doesn't allow you to run things in parallel; however, I do agree that TF doesn't have a favorable learning curve, so you might want to start with sklearn (or TFLearn) to get to know the basics of ML first.
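The parallel-subgraph idea can be sketched in plain Python (this is an analogy, not TensorFlow's API; with real TF graphs the runtime schedules independent ops concurrently for you):

```python
# Plain-Python analogy: two "subgraphs" with no data dependency on each
# other can be dispatched concurrently, then joined at a downstream node.
from concurrent.futures import ThreadPoolExecutor

def branch_a(x):
    # stand-in for one independent subgraph
    return sum(i * x for i in range(1000))

def branch_b(x):
    # stand-in for another subgraph, independent of branch_a
    return max(i + x for i in range(1000))

with ThreadPoolExecutor() as pool:
    fa = pool.submit(branch_a, 2)
    fb = pool.submit(branch_b, 2)
    # the "join" node depends on both branches, so it waits for both
    result = fa.result() + fb.result()

print(result)
```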
"Oops. Since this experiment loads over 14,000 bird sounds, you'll need to view it on a desktop computer."
There should be some way of doing this manually. Any ideas?
1. Download MKL from Intel's website, install to /usr/local/lib/
2. Change tensorflow's configure script to look for the downloaded library on OS X instead of just aborting.
3. Possibly change some other bazel build files to look for .dylib instead of .so files.
4. Build with extra flags to look for the appropriate libraries.
I'm not sure if all these steps are necessary but they were sufficient.
The reason I had to install to /usr/local/lib/ instead of Intel's suggestion of /opt/intel/something was that, with the latter, even though I passed the appropriate directory to the linker, I think there was still some intermediate binary that wasn't seeing that path. Putting the dylibs in the default directory solved that.
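Putting the steps together, the build might have looked roughly like this; the paths, flag names, and bazel target are my guesses, not the parent's actual commands, and steps 2-3 were manual edits with no one-liner equivalent:

```shell
# 1. MKL dylibs copied into the default linker search path:
#    /usr/local/lib/libmkl_rt.dylib etc.
# 2./3. tensorflow's configure script and bazel BUILD files patched by
#    hand to accept the OS X install and look for .dylib, not .so.
# 4. build the pip package, pointing the linker at the MKL libraries:
cd tensorflow
./configure
bazel build --config=opt --config=mkl \
    --linkopt=-L/usr/local/lib \
    //tensorflow/tools/pip_package:build_pip_package
```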
I can't contribute my patch because I did this on my employer's computer and it'd be an enormous hassle to work out the licensing stuff.
If you're working on a Mac without an NVIDIA GPU, the best bet may be OpenCL. I've seen a lot of commits for that, and when it's ready I'd be surprised if even MacBook GPUs didn't run laps around CPUs.