
Neural Networks with Few Multiplications - skybrian
http://arxiv.org/abs/1510.03009
======
TTPrograms
I've seen lots of alternative approaches to constructing neural nets
with varying operations, and a common theme is that the choice doesn't
matter much w.r.t. accuracy, so you may as well optimize for
performance. This suggests to me that neural nets are some variation on
an as-yet-unidentified, more elegant mathematical structure, one that
might enable better theoretical understanding of training phenomena.
The spin-glass equivalences, I think, further suggest more elegant
underlying structures.

Once we understand that underlying structure we might be able to do
really cool things, e.g. identify the nature and size of the training
set required to solve a given problem, or train much, much faster.
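
A minimal sketch of the kind of operation swap I mean, in the spirit of
the paper's binary-connect scheme, with weights constrained to {-1, +1}
(the details are illustrative assumptions, not the paper's exact
algorithm):

    #include <stdio.h>

    /* With weights constrained to {-1, +1}, the dot product in a
     * forward pass needs no multiplications at all, only
     * sign-conditional adds and subtracts. */
    static float binary_dot(const float *x, const signed char *w, int n) {
        float sum = 0.0f;
        for (int i = 0; i < n; i++)
            sum += (w[i] > 0) ? x[i] : -x[i];  /* multiply becomes add/sub */
        return sum;
    }

    int main(void) {
        float x[4] = { 0.5f, -1.0f, 2.0f, 0.25f };
        signed char w[4] = { +1, -1, -1, +1 };
        printf("%f\n", binary_dot(x, w, 4));  /* 0.5 + 1.0 - 2.0 + 0.25 = -0.25 */
        return 0;
    }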

------
Nomentatus
Some decades ago, on one of the first IBM PCs (12 MHz), I wrote a
neural network in C that used fixed-point multiplication (binary
shifts) and learned to play tic-tac-toe over about a weekend. I
probably still have some of the code lying around.
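
Roughly, the trick looked like this; a minimal sketch reconstructed
from memory, not the original code. Weights are restricted to signed
powers of two, so every "multiplication" is a single shift on
fixed-point activations:

    #include <stdint.h>
    #include <stdio.h>

    /* Activations in Q8.8 fixed point; each weight is sign * 2^shift,
     * so a "multiplication" is just an arithmetic shift. */
    typedef struct {
        int8_t sign;   /* -1 or +1 */
        int8_t shift;  /* magnitude is 2^shift; negative shifts scale down */
    } ShiftWeight;

    static int32_t shift_mul(int32_t act, ShiftWeight w) {
        int32_t scaled = (w.shift >= 0) ? (act << w.shift)
                                        : (act >> -w.shift);
        return (w.sign < 0) ? -scaled : scaled;
    }

    /* Dot product with no multiply instructions at all. */
    static int32_t neuron(const int32_t *acts, const ShiftWeight *ws, int n) {
        int32_t sum = 0;
        for (int i = 0; i < n; i++)
            sum += shift_mul(acts[i], ws[i]);
        return sum;
    }

    int main(void) {
        int32_t acts[3] = { 256, 512, 128 };  /* 1.0, 2.0, 0.5 in Q8.8 */
        ShiftWeight ws[3] = { {+1, 1}, {-1, 0}, {+1, 2} };  /* 2, -1, 4 */
        printf("%d\n", neuron(acts, ws, 3));  /* 2 - 2 + 2 = 2.0 -> 512 */
        return 0;
    }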

~~~
xj9
I would be extremely interested in looking at that!

------
eveningcoffee
I would highly recommend this 2012 video from Geoffrey Hinton,
[https://www.youtube.com/watch?v=DleXA5ADG78](https://www.youtube.com/watch?v=DleXA5ADG78),
which I think is related to this research.

------
nine_k
Wow. This opens a way for deep learning on _much_ cheaper and less power-
hungry hardware.

------
ClayFerguson
Does this not represent some "generalizable mechanism" for broadly
eliminating floating-point cycles, or is it so specific to its job that
there is no generalization worth considering? My instinct says this may
be somewhat generalizable, which would make it significant across
algorithms if it actually works.
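
My rough mental model of the mechanism, and why I suspect it
generalizes: snap one multiplicand to the nearest power of two and the
multiply collapses to an exponent adjustment, i.e. a bit shift on
integer hardware. A minimal sketch (my reading, not the paper's exact
scheme, and the log-scale rounding rule is my assumption):

    #include <math.h>
    #include <stdio.h>

    /* Round x to the nearest power of two (nearest in log scale), so
     * that multiplying by x becomes an exponent adjustment / shift. */
    static double pow2_round(double x) {
        if (x == 0.0) return 0.0;
        int e;
        double m = frexp(x, &e);  /* x = m * 2^e, 0.5 <= |m| < 1 */
        /* Below sqrt(1/2) the nearer power in log scale is 2^(e-1). */
        double mag = (fabs(m) < 0.7071067811865476) ? 0.5 : 1.0;
        return copysign(ldexp(mag, e), x);
    }

    int main(void) {
        double a = 3.7, b = 0.9;
        printf("exact: %f  approx: %f\n", a * b, a * pow2_round(b));
        /* exact: 3.330000  approx: 3.700000 (b snapped to 2^0 = 1.0) */
        return 0;
    }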

~~~
_0ffh
Yoshua Bengio is on this paper, you bet it's real!

------
raverbashing
Interesting, but the paper doesn't mention speed gains.

It is definitely promising. Even though floating-point operations today
are "cheap", integer ones are still faster.
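
If anyone wants to sanity-check that premise, here is a rough
micro-benchmark sketch; results depend heavily on compiler flags and
CPU, so treat it as a starting point rather than evidence:

    #include <stdio.h>
    #include <time.h>

    /* Crude timing of a float-multiply loop vs. an integer-shift loop.
     * volatile keeps the compiler from optimizing the loops away. */
    #define N 100000000L

    int main(void) {
        volatile float f = 1.0001f;
        volatile unsigned u = 3u;
        clock_t t0 = clock();
        for (long k = 0; k < N; k++) f = f * 1.0001f;
        clock_t t1 = clock();
        for (long k = 0; k < N; k++) u = (u << 1) | 1u;
        clock_t t2 = clock();
        printf("float mul: %.2fs  int shift: %.2fs\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }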

