
To add on top of the tutorial, it's recommended to compile with AVX, SSE and FMA instructions enabled if you are using a modern Intel CPU. It gives a pretty big boost for calculations that need to be done on the CPU.

The pip version of TF does not come with AVX and FMA for some reason, so this is one of the perks of compiling from source.
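
To make the suggestion above easier to check, here is a minimal sketch, assuming TensorFlow 1.x with the Session API: binaries built without these instruction sets log a warning such as "Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA" on startup, so running any small op and watching stderr tells you what your install was built with. The flags commonly passed to the Bazel build to enable them are --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-msse4.2, though the exact set depends on your CPU.

    # Minimal check sketch (assuming TensorFlow 1.x): run a small op and
    # watch stderr. Binaries built without AVX/FMA log a warning like
    # "Your CPU supports instructions that this TensorFlow binary was not
    # compiled to use: AVX2 FMA" when the session starts.
    import numpy as np
    import tensorflow as tf

    a = tf.constant(np.random.rand(2048, 2048), dtype=tf.float32)
    b = tf.constant(np.random.rand(2048, 2048), dtype=tf.float32)
    c = tf.matmul(a, b)

    with tf.Session() as sess:
        sess.run(c)  # no warning here -> the binary was built with those instructions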




It seems like TensorFlow 1.8 with CUDA 9.2 performs up to 37% faster than earlier versions of TensorFlow, as described in the post linked below: http://www.python36.com/benchmark-tensorflow-on-cifar10/


I've been playing around with a couple of DL frameworks recently, so I was wondering: what is the performance tradeoff between what you mention and PyTorch? Is it significantly different? I enjoy the pythonic "style" of PyTorch way more than the graph creation/precomputation approach of TensorFlow.
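
To make that style contrast concrete (this is not a benchmark), a minimal sketch of the same toy computation in both frameworks, assuming TF 1.x graph mode and a recent PyTorch:

    # TensorFlow 1.x: build a static graph first, then execute it in a session.
    import tensorflow as tf
    import torch

    x = tf.placeholder(tf.float32, shape=[None, 3])
    y = tf.reduce_sum(x * 2.0, axis=1)
    with tf.Session() as sess:
        print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # [12.]

    # PyTorch: operations run eagerly, like ordinary Python.
    xt = torch.tensor([[1.0, 2.0, 3.0]])
    print((xt * 2.0).sum(dim=1))  # tensor([12.])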



