
Random feedback weights support learning in deep neural networks - plg
http://arxiv.org/abs/1411.0247
======
dthal
This reminds me of something that came up in Andrew Ng's online ML class. He
said that it is important to check the correctness of your gradient
calculation in backprop (by comparing it against a finite difference of the
loss), because if you have a bug there, your algorithm might more or less
work anyway, making the bug hard to notice. Apparently you can still get
sensible output even with an incorrect gradient.
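
For anyone who hasn't seen it, here's a minimal numpy sketch of that
finite-difference check; the names and the 1e-5 step size are illustrative
choices, not from the course:

    import numpy as np

    def numerical_grad(loss, params, eps=1e-5):
        # central difference (L(p + eps) - L(p - eps)) / (2 * eps),
        # one parameter at a time
        grad = np.zeros_like(params)
        for i in range(params.size):
            orig = params.flat[i]
            params.flat[i] = orig + eps
            loss_plus = loss(params)
            params.flat[i] = orig - eps
            loss_minus = loss(params)
            params.flat[i] = orig  # restore
            grad.flat[i] = (loss_plus - loss_minus) / (2 * eps)
        return grad

    def rel_error(g_backprop, g_numeric):
        # compare against the backprop gradient; roughly 1e-7 is
        # healthy, roughly 1e-2 usually means a bug
        num = np.linalg.norm(g_backprop - g_numeric)
        den = np.linalg.norm(g_backprop) + np.linalg.norm(g_numeric) + 1e-12
        return num / den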

------
ezy
On reading this, my first question is about the properties of the "random"
feedback matrix. They illustrate what is happening using a tiny 1-width
machine and a "random" matrix of "1". It seems like some analysis needs to be
done on what kind of "random" is most appropriate to stand in for the true
gradient in larger machines. There could be something really interesting
going on, such that you could generate some optimal non-random B tailored to
the network topology.
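
For what it's worth, here is a minimal numpy sketch of the scheme as I read
the paper: the backward pass sends the error through a fixed random matrix B
instead of the transposed forward weights, and everything else is ordinary
SGD. The layer sizes, the Gaussian draw for B, and the toy regression target
are my own illustrative choices:

    import numpy as np

    rng = np.random.RandomState(0)
    n_in, n_hid, n_out = 4, 8, 2
    W1 = rng.randn(n_hid, n_in) * 0.1
    W2 = rng.randn(n_out, n_hid) * 0.1
    B = rng.randn(n_hid, n_out) * 0.1  # fixed random feedback, never trained

    X = rng.randn(100, n_in)
    T = rng.randn(n_out, n_in)  # random linear map as a toy target
    Y = X @ T.T

    lr = 0.01
    for epoch in range(200):
        for x, y in zip(X, Y):
            h = np.tanh(W1 @ x)  # forward pass
            yhat = W2 @ h
            e = yhat - y
            # backprop would send the error back through W2.T;
            # feedback alignment sends it through the fixed random B
            dh = (B @ e) * (1.0 - h ** 2)
            W2 -= lr * np.outer(e, h)
            W1 -= lr * np.outer(dh, x)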

------
fleitz
The implications of this are huge; it should drastically reduce processing
time for neural nets. Given this, I wonder whether networks could be updated
asynchronously/continuously.

~~~
benanne
I don't really understand how it would reduce processing time, could you
elaborate?

The main implications seem to be for neuroscience, as far as I can tell.
Backprop is considered biologically implausible because it requires either
bidirectional communication over synapses (which doesn't happen) or weight
sharing between neurons. But this allows the forward and backward connections
to be decoupled (i.e. they are different synapses).

This is really interesting stuff; my first reaction was "why does this even
work?" I think I still don't fully understand what's going on.

~~~
Jonanin
> Backprop is considered biologically implausible

This is not true. See neural backpropagation [1]. There are known mechanisms
for backward feedback across neural connections, for example spike-timing-
dependent plasticity, where inputs that are well correlated in time and
potential with output firings are strengthened over time. These phenomena are
vital to learning and neural development.

[1] http://en.m.wikipedia.org/wiki/Neural_backpropagation

~~~
Houshalter
Yes but that's not really anything like the backpropagation algorithm in
artificial neural networks.

------
MarkPNeyer
P = NP in the presence of a random oracle.

~~~
Madmallard
keyword: oracle

~~~
sgt101
random oracle - the random thing is what throws me...

