
Slide: First algorithm for training deep neural nets faster on CPUs than GPUs - Chirono
https://news.rice.edu/2020/03/02/deep-learning-rethink-overcomes-major-obstacle-in-ai-industry/
======
yayr
source:
[https://github.com/keroro824/HashingDeepLearning](https://github.com/keroro824/HashingDeepLearning)

paper: [https://arxiv.org/abs/1903.03129](https://arxiv.org/abs/1903.03129)
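For anyone skimming the paper: the core trick in SLIDE is to use locality-sensitive hashing to pick, per input, the small set of neurons likely to activate strongly, and compute only those. Here's a minimal sketch of that idea using SimHash (signed random projections); all names and sizes are illustrative, not taken from the reference code.

```python
import numpy as np

# Hedged sketch of the core idea in SLIDE (arXiv:1903.03129): hash each
# neuron's weight vector into buckets with SimHash, then for each input
# compute activations only for neurons in the input's bucket.
# Sizes and names here are illustrative assumptions, not the paper's.

rng = np.random.default_rng(0)
n_neurons, dim, n_bits = 1000, 64, 8

W = rng.standard_normal((n_neurons, dim))       # layer weight matrix
planes = rng.standard_normal((n_bits, dim))     # shared random hyperplanes

def simhash(v):
    """SimHash signature: sign pattern of v against the random hyperplanes."""
    return tuple(bool(b) for b in (planes @ v) > 0)

# Build the hash table: bucket each neuron by its weight vector's signature.
buckets = {}
for i, w in enumerate(W):
    buckets.setdefault(simhash(w), []).append(i)

def sparse_forward(x):
    """Compute pre-activations only for neurons hashed to x's bucket."""
    active = buckets.get(simhash(x), [])
    return {i: float(W[i] @ x) for i in active}  # sparse result

x = rng.standard_normal(dim)
acts = sparse_forward(x)
# Only a small fraction of the 1000 neurons is touched for this input.
```

The full system maintains several such tables and rehashes periodically as weights change, but the sparsity win above is why it maps well to CPUs: per input you do a few dot products instead of a dense matrix multiply.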

------
m0zg
It's the first algorithm to train _recommenders_ faster on CPUs, not neural nets
in general. And the "CPU" they use is actually 2 CPUs which, even after the
price drop, sell for $4K+ each.

~~~
jimsmart
Yes, but they're benchmarking those 2x $4k CPUs (which you're complaining
about) against a V100 GPU, which costs $7-11k, so where's the problem here?
It's not like they're comparing their $8k worth of CPUs against a consumer GTX
1080.

Furthermore, there's nothing in their paper that seems to indicate that their
new algorithm cannot be generalised to tasks other than recommendation.
(Although admittedly I may have missed something)

~~~
m0zg
No problem whatsoever. It's just that it's a super narrow problem and it
doesn't make GPUs obsolete, which is how people are misinterpreting this. In
fact it doesn't even save that much money, even if you use a V100: for fp32
the V100 has about the same throughput as a consumer GPU at 7-10x the cost.

~~~
jimsmart
The primary discussion thread on this now seems to be here. HTH.

[https://news.ycombinator.com/item?id=22502131](https://news.ycombinator.com/item?id=22502131)

