
What is the future of deep learning Hardware? - deepnotderp
So, I was thinking about what the future holds for deep learning hardware. It seems like Nvidia dominates, but big challengers like Intel (and Nervana!), AMD, Xilinx, etc. are coming up with their own products. Startups like Wave Computing and Graphcore also appear to be doing interesting stuff.
I'm very interested in FPGAs for deep learning, and even more in ASICs and special-purpose chips for deep learning. Are there any papers and/or companies you could point me to? What do you guys think about the future of deep learning hardware? I understand that GPUs are already very good because of their matrix multipliers and FPUs, but surely an opportunity exists just by lowering precision (mainly for inference, though apparently stochastic rounding makes it work for training as well)?
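The stochastic rounding mentioned above is the trick that makes low-precision training viable: instead of always rounding to the nearest value, you round up or down with probability proportional to distance, so the result is unbiased in expectation and gradient information isn't systematically lost. A minimal sketch (my own illustration, not from any particular paper or chip):

```python
import numpy as np

def stochastic_round(x, rng=None):
    """Round each element of x up or down at random, with the
    probability of rounding up equal to the fractional part.
    This makes the rounding unbiased: E[stochastic_round(x)] == x,
    which is why accumulated low-precision gradient updates still
    converge on average."""
    rng = np.random.default_rng() if rng is None else rng
    floor = np.floor(x)
    frac = x - floor  # distance to the lower integer, in [0, 1)
    return floor + (rng.random(np.shape(x)) < frac)

# Example: 0.3 rounds to 1.0 about 30% of the time, 0.0 otherwise,
# so the mean over many samples stays close to 0.3.
```

The same idea applies at any fixed-point precision by scaling before and after the rounding step.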
======
petra
The lowest precision is binary. Look up "BinaryConnect" for the research.
Also, the group behind PULPino did a chip implementation of BinaryConnect
with great results.
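The core of the BinaryConnect idea is small enough to sketch: weights are constrained to +1/-1 in the forward and backward passes, while a full-precision "shadow" copy accumulates the gradient updates. A rough illustration of the deterministic variant (my own sketch, assuming sign binarization as described in the paper):

```python
import numpy as np

def binarize(w):
    """Deterministic BinaryConnect-style binarization:
    map each real-valued weight to +1 or -1 by its sign.
    Multiplies then reduce to additions/subtractions, which is
    why binary networks are so cheap in hardware."""
    return np.where(w >= 0, 1.0, -1.0)

def train_step(w_real, grad, lr=0.01):
    """One update: the gradient is computed with the binarized
    weights, but applied to the full-precision shadow copy, which
    is also clipped to [-1, 1] so the sign can still flip."""
    w_real = np.clip(w_real - lr * grad, -1.0, 1.0)
    return w_real, binarize(w_real)
```

At inference time only the binarized weights are needed, so the stored model is 1 bit per weight.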

Another possibility is the work of Jennifer Hasler at Georgia Tech on
analog neural networks, including a roadmap of the possibilities.

Those are possibly the theoretically optimal approaches, but it's not
certain that they'll work.

------
0xc000005
GPUs are a bit silly, even though they work well: their power consumption
per unit of computation is higher than that of FPGAs, and orders of magnitude
higher than that of the human brain. The problem with deep learning is that
real brains don't do multiply-accumulates and so contain no power-hungry
floating-point hardware. So there is a lot of room for improvement here.

------
0xc000005
See [https://fas.org/irp/agency/dod/jason/ai-dod.pdf](https://fas.org/irp/agency/dod/jason/ai-dod.pdf) for more.

------
alain94040
The best deep learning stack on FPGA I know of:
[http://mipsology.com](http://mipsology.com)

It will be interesting to see the race between FPGAs and GPUs over the next
two years. Both performance and power consumption are going to improve
significantly.

------
0xc000005
Lowering precision has already been done by several firms.

