
Ask HN: Building a Deep Learning Workstation: Intel Phi and New Pascal Titan X? - ActsJuvenile
I was running experiments on Torch and TensorFlow, and noticed that my old CPU was bottlenecking the GPU. One NVidia advisor I talked to corroborated that GPU acceleration mostly helps the convolution layers, while the rest of the layers are CPU-bound. That's why NVidia uses dual 20-core Xeon CPUs to feed the Tesla GP100s in the DGX-1.

With this nugget of information in hand, I started researching massively multi-core CPUs and came across the Developer Preview of an Intel Xeon Phi machine:

Specs and price: http://dap.xeonphi.com/#platformspecs

Video preview: https://www.youtube.com/watch?v=s2Z3O32am9I

It looks like a solid workstation with a 64-core Xeon Phi, which is fully binary-compatible with a normal Xeon. I was wondering: if I buy their liquid-cooled workstation for $5K and stick two NVidia Pascal Titan X cards in the PCIe x16 slots, would that be a good price-performance combo?

I understand the AVX-512 units will be wasted, but my purpose is to use the 64 cores for normal x86 compute threads that feed the Titans. This system seems able to match the NVidia DIGITS workstation for half the price, which is my main objective.

Thoughts? Critiques? Suggestions?

Thanks!
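The feeding pattern I have in mind looks roughly like this, a minimal Python sketch of CPU worker threads filling a bounded queue that the GPU training loop drains (`preprocess` and `feed_gpu` are made-up stand-ins for the real CPU-side work, not any framework's API):

```python
# Several CPU threads do the (hypothetical) preprocessing and feed results
# into a queue; the "GPU" side simply drains the queue as fast as it can.
import queue
import threading

def preprocess(sample):
    # Stand-in for real CPU-side work (decode, augment, batch).
    return sample * 2

def feed_gpu(samples, num_workers=4):
    batches = queue.Queue(maxsize=32)   # bounded so workers can't run ahead
    done = object()                     # sentinel marking the end of a worker

    def worker(shard):
        for s in shard:
            batches.put(preprocess(s))
        batches.put(done)

    # Split the samples round-robin across worker threads.
    shards = [samples[i::num_workers] for i in range(num_workers)]
    for shard in shards:
        threading.Thread(target=worker, args=(shard,), daemon=True).start()

    finished, results = 0, []
    while finished < num_workers:       # "GPU" side: consume until all done
        item = batches.get()
        if item is done:
            finished += 1
        else:
            results.append(item)
    return results

print(sorted(feed_gpu(list(range(8)))))  # -> [0, 2, 4, 6, 8, 10, 12, 14]
```

If the per-sample CPU work is heavy, this is exactly where core count (or clock speed) on the host starts to matter.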
======
wandering_logic
Bottlenecked on the CPU, yes, but probably only on a single thread (or a
small number of threads). Xeon Phi will give you lots of slow threads.
For the CPU you'd be better off getting a low-core-count processor with the
best clock rate you can find/afford. The gamer-oriented i7 extremes are
usually good.

~~~
ActsJuvenile
True - I am considering the 3.7-4GHz i7 processors as well because of the
higher clock speed.

A couple of things make the Phi enticing: 1) 16GB of on-chip MCDRAM, which
is a lot of cache! 2) I can get a copy of the Intel Parallel Studio XE
compiler and build Lua and Torch from source, making them use more of the
Phi's cores.
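The "use more cores" part is the easy half; even without a special compiler, the stdlib can fan CPU-side work out across however many cores the host reports. A minimal sketch (`cpu_prep` is a hypothetical placeholder for real per-sample preprocessing, not a Torch function):

```python
# Fan CPU-bound work out across processes, sized to the host's core count
# (64 on the Phi box described above).
import os
from multiprocessing import Pool

def cpu_prep(x):
    return x * x  # placeholder for real per-sample CPU work

def prep_all(samples):
    workers = os.cpu_count() or 1
    with Pool(processes=workers) as pool:
        return pool.map(cpu_prep, samples)

if __name__ == "__main__":
    print(prep_all(range(10)))  # -> [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Whether 64 slow Phi cores beat 8 fast i7 cores on this kind of work is exactly the open question.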

I have reached out to Intel AI developers and NVidia cuDNN team to ask their
take on it. Let's see what they advise.

~~~
mnbbrown
Did you ever hear anything back from the Intel or NVidia guys?

