
Deep learning box for $1700 - pplonski86
https://blog.slavv.com/the-1700-great-deep-learning-box-assembly-setup-and-benchmarks-148c5ebe6415
======
jph00
FYI the OP also put together a really nice way to create persistent spot
instances on AWS - see approach 2 here:
[http://wiki.fast.ai/index.php/AWS_Spot_instances](http://wiki.fast.ai/index.php/AWS_Spot_instances)

------
Matthias247
If stock (boxed) coolers haven't improved significantly in the last few
years, then a good aftermarket heatsink/fan would have been a very good
investment. At least if you sit in the same room where the box is running.

------
joshu
Wrong CPU. Only 16 PCIe lanes.

~~~
barrkel
PCIe lane width isn't normally a significant bottleneck for GPU performance,
for games at least. Going from 16x to 8x loses you 1% or so, depending on the
hardware. I don't know the bus traffic profile of software being run for deep
learning, but it may not be noticeable.
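
As a rough back-of-the-envelope sketch (with assumed numbers, not
measurements): PCIe 3.0 moves roughly 0.985 GB/s per lane, so even a large
input batch transfers in milliseconds, which is why halving the lanes rarely
halves throughput:

    # Back-of-the-envelope PCIe transfer time for one training batch.
    # Assumed numbers: PCIe 3.0 at ~0.985 GB/s usable per lane, and a
    # hypothetical batch of 256 RGB images at 224x224 in float32.
    LANE_GBPS = 0.985                      # usable GB/s per PCIe 3.0 lane
    batch_bytes = 256 * 3 * 224 * 224 * 4  # ~154 MB per batch

    for lanes in (16, 8):
        seconds = batch_bytes / (lanes * LANE_GBPS * 1e9)
        print(f"x{lanes}: {seconds * 1000:.1f} ms per batch")
    # x16: ~9.8 ms, x8: ~19.6 ms -- usually overlapped with compute,
    # so the x16 -> x8 drop often goes unnoticed.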

I'd also pick a CPU with more lanes though.

~~~
joshu
It's fine for a single GPU. For multiple GPUs in a deep learning box you want
more.

~~~
Toast_
What CPU would you recommend? Would it be better to go with a dual-CPU mobo?

~~~
en4bz
Definitely not since only 1 socket is usually directly attached to the PCIe
bus. The best CPU for the money at the moment is most likely an i7 5830K with
40 PCIe lanes. Or you could wait (probably the best idea) for AMD to release
it's "threadripper" CPUs tomorrow with 44 lanes and for Intel to release it's
new i7/i9 lineup in 2 weeks, also with 44 lanes.

~~~
Toast_
Cool, thanks for the info.

------
rahimnathwani
I went for a slightly cheaper option ($950), with a slower GPU, less disk, but
more RAM and better PSU and case:

- Used Dell T7500 (Xeon X5675, 48GB RAM): US$350 including delivery from
eBay: [http://www.ebay.com/itm/Dell-T7500-Home-PC-Xeon-6-Core-3-06G...](http://www.ebay.com/itm/Dell-T7500-Home-PC-Xeon-6-Core-3-06GHz-X5675-48GB-RAM-/172642310795)

- Nvidia GTX 1080 8GB (US$480)

- SSD ($100)

- USB 3.0 PCIe card ($20)
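
(Quick sanity check in Python that the parts above add up to the quoted
$950:)

    # Totals taken from the parts list above.
    parts = {
        "Used Dell T7500 (Xeon X5675, 48GB RAM)": 350,
        "Nvidia GTX 1080 8GB": 480,
        "SSD": 100,
        "USB 3.0 PCIe card": 20,
    }
    print(sum(parts.values()))  # 950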

------
throwaway32421
Considering the ridiculous amounts of time spent on training, how do those
of you doing this deal with 'work'? Do you work remotely? Run multiple
experiments at the same time?

I'm having an increasingly difficult time explaining to my client why it's
taking so much longer to debug the code as opposed to getting the
implementation off the ground.

------
lowglow
This is 100% the process I've gone through to build a DL box. Anaconda saved
me in the fast.ai course lessons.

~~~
monkmartinez
How did you like those courses?

~~~
lowglow
It's OK, but I definitely think we need bigger-picture / organically growing
course material.

------
jacquesm
If the OP reads this, this fragment makes no sense:

> Even thought the GPU is the MVP in deep learning

Also, if you ever do another build: do not take the parts out of their
protective packaging until you need them. A nice little bolt of static from
your fingertips to the motherboard, memory sticks, or GPU terminals could
ruin the part. Those bags and the foam are conductive and help dissipate
static charge, so it's better to keep the parts in them until the last
moment.

~~~
quadrature
MVP as in Most Valuable Player.

~~~
jacquesm
Most people would read that abbreviation as 'Minimum Viable Product' in the HN
context.

~~~
monkmartinez
Not me, I totally understood what he meant due to the context. The reality is
that "Minimum Viable Product" was hijacked from the MVP of sports.

~~~
jacquesm
I'm not into sports (unless I'm doing something myself :) ), so I never heard
it used like that before.

~~~
esrauch
Its a pretty common Americanism that is disconnected from sports in my
experience, I suspect the average American would recognize it as
"something/someone that is contributing a lot", and only a tiny percentage
would even recognize "minimum viable product" spelled out.

------
nacc
I have thought about the same thing when playing with deep learning at home,
until I put the electricity bill into the equation. A beast like what the
author has built, running at full power, will cost about $0.1/hour in
electricity alone where I live ... which is almost equal to the cost of a
p2.xlarge spot instance on Amazon.

So if someone else is paying for the electricity, build your own rig;
otherwise, Amazon is pretty hard to beat.
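
A minimal sketch of that comparison, assuming a roughly 600W full-load draw
and a $0.17/kWh rate (both hypothetical; spot prices also vary by region and
demand):

    # Rough cost comparison: home box electricity vs. p2.xlarge spot.
    # Assumed numbers, not from the article: 600W full-load draw,
    # $0.17/kWh electricity, ~$0.10/hr typical p2.xlarge spot price.
    box_watts = 600
    price_per_kwh = 0.17
    spot_price_per_hr = 0.10

    electricity_per_hr = box_watts / 1000 * price_per_kwh
    print(f"home box: ${electricity_per_hr:.3f}/hr "
          f"vs spot: ${spot_price_per_hr:.2f}/hr")
    # home box: $0.102/hr -- about the same, before even counting
    # the hardware cost.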

~~~
hueving
Spot pricing is a pretty poor comparison point, since you only get to use
spare capacity. If you have any project where availability matters, then you
should compare against the on-demand price of $0.9/hour.

If availability doesn't matter, it means you wouldn't run the box full time
anyway, in which case your power calculation is too high.

~~~
marcosdumay
It's a processing task: you run it for a fixed amount of compute time. Wall
time may or may not matter depending on the application.

You almost never take availability into account, but using spot instances
will increase wall time.

------
adsfk32
You can get a 12-core Xeon v3 (evaluation sample) for ~$200.

