If you still want to install the GPU version on a recent version of macOS (I'm on 10.12.2), the installation instructions tell you to run
$ sudo ln -s /Developer/NVIDIA/CUDA-8.0/lib/libcudnn* /usr/local/cuda/lib/
Yes, you need to do that. What the instructions won't tell you is that you ALSO need an additional `ln` step to get it to work. Ignore the GitHub issues that talk about disabling SIP (I didn't need to do that) so that you can set a DYLD_LIBRARY_PATH, or that suggest various other calls to `ln`. I just spent hours figuring this out this week, so I want to pass it on.
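For reference, the shape of what worked for me was roughly the following; the exact source and target of the second symlink will depend on your particular CUDA/cuDNN layout, so treat the paths here as illustrative rather than definitive:

```shell
# The documented step: expose the cuDNN dylibs where TensorFlow looks for them.
sudo ln -s /Developer/NVIDIA/CUDA-8.0/lib/libcudnn* /usr/local/cuda/lib/

# The undocumented extra step is another symlink of the same shape;
# point it at wherever your install actually put the library.

# Afterwards, confirm the links resolve (a broken link shows up as an error here):
ls -lL /usr/local/cuda/lib/libcudnn*
```

No SIP changes and no DYLD_LIBRARY_PATH were needed for me once the links resolved cleanly.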
That's strange. Got a screenshot or something showing that?
I have a GTX 750 Ti SC (which I think is close to the 750M?) with 2 GB; I use it on my Ubuntu 14.04 LTS box. nvidia-smi reports that only about 200 MB of the 2 GB is used for the graphics sub-system (I have a dual-head setup, too).
Maybe the MBP uses more of the memory? I dunno (my MBP here has Intel unfortunately)...
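For anyone who wants to check their own card, nvidia-smi can print just the memory numbers via its documented `--query-gpu` interface:

```shell
# Print card name plus used/total memory for each GPU, in CSV form.
nvidia-smi --query-gpu=name,memory.used,memory.total --format=csv

# Or watch usage live while a training job runs (refresh every 2 seconds).
watch -n 2 nvidia-smi
```

That makes it easy to compare how much the desktop itself is eating before any training starts.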
I know that training models on it is much faster than on the quad-core AMD chip (can't remember which one it is, but nothing recent). Still, I have hit the memory limits, and at that point batching or other methods are needed to push larger models through, at a cost in speed.
I really need to drop something better in there...
It's broken for me since I have all my infrastructure in Amazon, so why would I move everything to Google in a year, when the GPUs are ready?
I realize that beta means beta, but there's very little example code out there that runs properly (especially Google's own samples), and in many cases their documentation is either outdated or just plain missing.
It just doesn't feel like it's baked, even for one of their betas.