
Mixed Precision Training for Deep Learning Models - snarang
https://arxiv.org/abs/1710.03740
======
gdiamos
Here we show that more than 15 large-scale deep neural networks can be
trained with mixed-precision arithmetic, using IEEE float16 for
multiplications and IEEE float32 for additions (accumulation), with no loss
in accuracy.
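
For illustration, here is a minimal NumPy sketch of that arithmetic (this is
our own toy example, not the paper's implementation; the function name and the
use of NumPy are assumptions). It computes a dot product where each product is
rounded to float16 but the running sum is kept in float32:

    import numpy as np

    def fp16_mul_fp32_acc_dot(a, b):
        # Round the inputs, and each elementwise product, to IEEE float16.
        prods = a.astype(np.float16) * b.astype(np.float16)
        # Accumulate the float16 products in an IEEE float32 sum.
        return prods.astype(np.float32).sum(dtype=np.float32)

    x = np.random.randn(4096)
    y = np.random.randn(4096)
    print(fp16_mul_fp32_acc_dot(x, y), np.dot(x, y))

Keeping the accumulator in float32 avoids the rounding error that builds up
when many small float16 products are summed into a float16 total, which is
one reason accuracy can be preserved.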

We are happy to answer questions about this work.

