
TensorFlow 1.0.0-rc0 - tempw
https://github.com/tensorflow/tensorflow/releases/tag/v1.0.0-rc0
======
j_m_b
For those of you who want to try this out on macOS, I would suggest skipping
the GPU installation. It doesn't just install and run. Also, there aren't very
many video cards that come with Macs that have enough video memory to do even
the notMNIST exercises from the Udacity TensorFlow course.

If you still feel like you want to install the GPU version on a recent version
of macOS (I'm on 10.12.2), you'll see in the installation instructions to run
the command

$ sudo ln -s /Developer/NVIDIA/CUDA-8.0/lib/libcudnn* /usr/local/cuda/lib/

Yes, you need to do that. What you won't find is that you ALSO need to do a

export LD_LIBRARY_PATH=/usr/local/cuda/lib

to get it to work. Ignore the GitHub issues that suggest disabling SIP so
that you can set DYLD_LIBRARY_PATH, or that recommend various other 'ln'
invocations; I didn't need any of that. I just spent hours figuring this out
this week, so I want to pass it on.
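A quick way to double-check that the symlinked cuDNN libraries are actually
discoverable is a small stdlib-only script. The helper below is mine (not
part of TensorFlow's install); it just looks through LD_LIBRARY_PATH plus the
/usr/local/cuda/lib target mentioned above:

```python
import os


def cudnn_visible(lib_dirs=None):
    """Return the first directory containing a cuDNN library, or None.

    By default, searches the directories on LD_LIBRARY_PATH plus the
    /usr/local/cuda/lib location the symlink command above creates.
    """
    if lib_dirs is None:
        lib_dirs = os.environ.get("LD_LIBRARY_PATH", "").split(":")
        lib_dirs.append("/usr/local/cuda/lib")
    for d in lib_dirs:
        if not d:
            continue
        try:
            names = os.listdir(d)
        except OSError:
            continue  # directory missing or unreadable; keep looking
        if any(n.startswith("libcudnn") for n in names):
            return d
    return None
```

If it returns None, the `export` above probably hasn't taken effect in the
shell you're launching Python from.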

~~~
svantana
Even worse, I have the MBP with the 750M GPU (which I believe is the fastest
CUDA card in any MBP) and the models and batch sizes that do fit in memory are
not particularly faster than on the CPU, because they can't fully exploit the
huge parallelism. 2 GB memory sounds like a lot until you realize that a large
part of it is already being used for, you know, graphics. It's possible to
get the full 700 GFLOPS out of the GPU (versus ~300 for the CPU), but with
most of my models (many layers, etc.) I get about 200 GFLOPS. Even so, it can
be handy to free up the CPU for other tasks while training runs.
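If you want a ballpark figure like these for your own machine, timing a large
matrix multiply is the usual trick (a dense n x n matmul is roughly 2*n^3
floating-point ops). A rough sketch using NumPy, which measures the CPU here;
the helper name and constants are mine, and you'd swap in a TensorFlow matmul
to measure the GPU instead:

```python
import time

import numpy as np


def matmul_gflops(n=1024, repeats=5):
    """Estimate effective GFLOPS from timing an n x n float32 matmul.

    Takes the best of several runs to reduce timer noise; a dense matmul
    costs about 2*n**3 floating-point operations.
    """
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        a @ b
        best = min(best, time.perf_counter() - t0)
    return 2 * n ** 3 / best / 1e9
```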

~~~
cr0sh
> 2 GB memory sounds like a lot until you realize that a large part of it is
> already being used for, you know, graphics.

That's strange? Got a screenshot or something showing that?

I have a GTX 750 Ti SC (which I think is close to the 750M?) with 2 GB - I
use it on my Ubuntu 14.04 LTS box. nvidia-smi reports that only about 200 MB
of the 2 GB is used for the graphics sub-system (I have a dual-head setup,
too).
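If you want to check the same numbers programmatically rather than eyeballing
the nvidia-smi table, its CSV query mode works well; a small sketch (the
wrapper function is mine, the query flags are standard nvidia-smi options):

```python
import subprocess


def gpu_memory_mib(smi_output=None):
    """Return a list of (used, total) memory pairs in MiB, one per GPU.

    Runs `nvidia-smi --query-gpu=memory.used,memory.total
    --format=csv,noheader,nounits` unless canned output is supplied
    (which also makes the parsing testable without a GPU).
    """
    if smi_output is None:
        smi_output = subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=memory.used,memory.total",
             "--format=csv,noheader,nounits"],
            universal_newlines=True)
    pairs = []
    for line in smi_output.splitlines():
        if line.strip():
            used, total = (int(v) for v in line.split(","))
            pairs.append((used, total))
    return pairs
```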

Maybe the MBP uses more of the memory? I dunno (my MBP here has Intel
unfortunately)...

I know that training models on it is much faster than using the quad-core AMD
chip (can't remember which one it is - but nothing recent); still, I have hit
up against the memory limits, and at that point batching or other methods are
needed to get larger stuff through it, at a penalty in speed.
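The batching workaround can be as simple as slicing the dataset so that each
step fits in device memory; a minimal sketch (hypothetical helper, not from
any particular library):

```python
def minibatches(data, batch_size):
    """Yield successive slices of data, each at most batch_size long.

    Trades one big device transfer for several smaller ones, so each
    step fits within the GPU's memory limit.
    """
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]
```

For example, `minibatches(list(range(10)), 4)` yields slices of 4, 4, and 2
items.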

I really need to drop something better in there...

------
jorgemf
Very nice library, but I hope they let us use GPUs in Google Cloud instead of
CPUs soon. This makes no sense to me: they provide a good library, but they
don't yet provide proper infrastructure to run it on.

~~~
spankalee
You're probably better off using Google Cloud Machine Learning, which might
be running on the custom TPU accelerator (not sure).

~~~
davidmr
I've been spending several hours a day on it over the last couple weeks, and
I'm really frustrated with their Cloud ML beta.

I realize that beta means beta, but there's very little working code out there
that runs properly (especially Google's sample code), and in many cases their
documentation is either outdated or just plain missing.

It just doesn't feel like it's baked, even for one of their betas.

------
1024core
Can people recommend any good and extensive tutorials/courses on TF? I've
heard that the Udacity course is not that great.

