TensorFlow 1.0.0-rc0 (github.com)
103 points by tempw on Jan 27, 2017 | 16 comments



For those of you who want to try this out on macOS, I would suggest skipping the GPU installation. It doesn't just install and run. Also, very few of the video cards that ship in Macs have enough video memory to handle even the notMNIST exercises from the Udacity TensorFlow course.

If you still want to install the GPU version on a recent version of macOS (I'm on 10.12.2), the installation instructions tell you to run the command

$ sudo ln -s /Developer/NVIDIA/CUDA-8.0/lib/libcudnn* /usr/local/cuda/lib/

Yes, you need to do that. What you won't find is that you ALSO need to do a

export LD_LIBRARY_PATH=/usr/local/cuda/lib

to get it to work. Ignore the GitHub issues that suggest disabling SIP so you can set DYLD_LIBRARY_PATH, or the various other 'ln' incantations; I didn't need any of that. I just spent hours figuring this out this week, so I want to pass it on.
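(For anyone who wants a quick sanity check that the symlink and LD_LIBRARY_PATH actually took: listing the local devices from Python should show a /gpu:0 entry next to the CPU. This is just a generic check, not something from the official install docs.)

    import tensorflow as tf
    from tensorflow.python.client import device_lib

    # If cuDNN/CUDA are wired up correctly, a "/gpu:0" device should
    # appear alongside the CPU in this listing.
    print(device_lib.list_local_devices())

    # Logging device placement on a trivial op is another quick check.
    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        a = tf.constant([1.0, 2.0, 3.0], name='a')
        print(sess.run(tf.reduce_sum(a)))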


Even worse, I have the MBP with the 750M GPU (which I believe is the fastest CUDA card in any MBP), and the models and batch sizes that do fit in memory are not particularly faster than on the CPU, because they can't fully exploit the massive parallelism. 2 GB of memory sounds like a lot until you realize that a large part of it is already being used for, you know, graphics. It's possible to get the full 700 GFLOPS out of the GPU (versus ~300 for the CPU), but with most of my models (many layers, etc.) I get about 200 GFLOPS. Even so, it can be handy to free the CPU up for other tasks while training is running.
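For what it's worth, you can cap how much of that 2 GB TensorFlow claims, so the window server keeps some headroom. A minimal sketch using the standard session config options (the 0.7 fraction is just an illustrative value):

    import tensorflow as tf

    config = tf.ConfigProto()
    # Only claim a fraction of the card so the display keeps its share
    # (0.7 is just an example value, not a recommendation).
    config.gpu_options.per_process_gpu_memory_fraction = 0.7
    # Or allocate GPU memory lazily instead of grabbing it all up front.
    config.gpu_options.allow_growth = True

    sess = tf.Session(config=config)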


> 2 GB of memory sounds like a lot until you realize that a large part of it is already being used for, you know, graphics.

That's strange? Got a screenshot or something showing that?

I have a GTX 750 Ti SC (which I think is close to the 750M?) with 2 GB - I use it on my Ubuntu 14.04 LTS box. nvidia-smi reports that only about 200 MB of the 2 GB is used for the graphics subsystem (I have a dual-head setup, too).

Maybe the MBP uses more of the memory? I dunno (my MBP here has Intel unfortunately)...

I know that training models on it is much faster than using the quad-core AMD chip (can't remember which one it is - but nothing recent); still, I have run up against the memory limits, and at that point batching or other methods are needed to get larger stuff through it, at a penalty to speed.
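The batching workaround is basically feeding the data through in fixed-size slices instead of one giant feed. A rough, self-contained sketch; the 784/10 shapes and the linear model are just stand-ins for whatever you're actually training:

    import numpy as np
    import tensorflow as tf

    # Toy stand-in data; a real dataset would be loaded from disk.
    train_x = np.random.rand(10000, 784).astype(np.float32)
    train_y = np.random.rand(10000, 10).astype(np.float32)

    x = tf.placeholder(tf.float32, [None, 784])
    y = tf.placeholder(tf.float32, [None, 10])
    w = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

    batch_size = 64  # shrink this until one batch fits in GPU memory

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Walk the dataset in fixed-size slices instead of one huge feed.
        for start in range(0, len(train_x), batch_size):
            end = start + batch_size
            sess.run(train_op, feed_dict={x: train_x[start:end],
                                          y: train_y[start:end]})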

I really need to drop something better in there...


On some Windows and Linux machines that have an Intel mobile CPU and a discrete graphics chip, it is possible to switch the display to the Intel GPU. Is something like that possible on an MBP?


I think you can just set DYLD_FALLBACK_LIBRARY_PATH to /Developer/NVIDIA/CUDA-8.0/lib and not have to copy anything around.


Very nice library, but I hope they let us use GPUs in Google Cloud instead of CPUs soon. It makes no sense to me: they provide a good library, but they don't yet provide proper infrastructure to run it.


It's not nonsense. They're two different things that are/were in beta. They will eventually work together (likely sooner rather than later), but just because they didn't on day 0 doesn't make it fundamentally broken.


TensorFlow has been in beta for a year, GPU instances have been in alpha for a couple of months, and there is no news about TPUs.

It is broken for me: all my infrastructure is already on Amazon, so why would I move everything to Google in a year when the GPUs are finally ready?


IIRC, their plan is to offer custom TPU instances.


I know, but it doesn't make sense to offer them in a couple of years, when we can already run TensorFlow on GPUs at Amazon and other providers. We actually ran the numbers at my company, and Amazon GPUs are cheaper than Google Cloud at current pricing. I'm talking about more than 10 million inferences a day.


You're probably better off using Google Cloud Machine Learning, which might be running on the custom TPU accelerator (not sure).


I've been spending several hours a day on it over the last couple weeks, and I'm really frustrated with their Cloud ML beta.

I realize that beta means beta, but there's very little working code out there that runs properly (especially Google's sample code), and in many cases their documentation is either outdated or just plain missing.

It just doesn't feel like it's baked, even for one of their betas.


GPUs are in alpha and there's no news about TPUs; that's what doesn't make sense.



That is why I said "yet". They are in alpha: https://cloud.google.com/gpu/


Can people recommend any good and extensive tutorials/courses on TF? I've heard that the Udacity course is not that great.



