
New Optimizations Improve Deep Learning Frameworks for CPUs - rbanffy
https://www.nextplatform.com/2017/10/13/new-optimizations-improve-deep-learning-frameworks-cpus/
======
Eridrus
This seems like a good study in how to present misleading benchmark numbers.

They have a 72x improvement, great! So first, start with a really crappy
baseline that isn't tuned for your high-end 68-core machine - one that runs
slower than the 22-core machine - so that you have a ton of headroom to
improve.

Then pump up the batch size to 2048 images per batch, because it's an easy
knob to twiddle that has nothing to do with implementation speed, you don't
really care about the actual model accuracy, and you just want to prove how
fast you can make this. (I'm ignoring the recent work on making large-batch
training work, but the larger point is that batch size is a model
hyperparameter that influences more than just speed.)
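A toy numpy timing sketch (synthetic sizes, nothing from the article) of why
cranking the batch size alone inflates images/sec - one big matmul amortizes
fixed overhead better than many small ones, with no change to the
"implementation" at all:

    import time
    import numpy as np

    weights = np.random.rand(4096, 4096).astype(np.float32)

    def images_per_sec(batch):
        x = np.random.rand(batch, 4096).astype(np.float32)
        start = time.perf_counter()
        for _ in range(2048 // batch):  # same 2048 "images" either way
            x @ weights                 # stand-in for a forward pass
        return 2048 / (time.perf_counter() - start)

    for b in (32, 256, 2048):
        print(b, round(images_per_sec(b)))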

And then ignore the fact that it's performance per $ that people care about,
not pure speed.

~~~
scottlegrand2
Benchmarksmanship at its finest...

------
naturalgradient
What a bad, outdated article. I don't understand how articles from this
website keep appearing on the front page here; they are usually badly
researched puff pieces.

Aside from the nonsensical performance numbers:

Theano, Neon, Torch, really?

No mention of PyTorch, or that Theano is actually being discontinued...

------
antorsae
And the least expensive Intel Xeon SP Platinum you can buy is ~$3000. No
wonder there's no head-to-head benchmark on a cost basis, e.g.
epochs/min per $.
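A back-of-the-envelope version of that metric, with made-up numbers purely to
show the normalization (neither system's figures are from the article):

    systems = {
        "Xeon Platinum box": {"epochs_per_min": 1.2, "price_usd": 10000},
        "GPU box":           {"epochs_per_min": 4.0, "price_usd": 7000},
    }
    for name, s in systems.items():
        value = s["epochs_per_min"] / s["price_usd"] * 1000
        print(f"{name}: {value:.2f} epochs/min per $1000")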

------
afsina
This seemed like an Intel ad.

~~~
rocqua
And a badly written one at that. It stinks of trying to make Intel look good,
emphasis on trying.

------
ris
> About the Author: ...Reinders was most recently the parallel programming
> model architect for Intel’s HPC business...

------
senatorobama
Is Wave Computing shipping yet?

