
Wait, why can't you? Backprop is basically adjusting weights to get the model closer to making correct predictions. Isn't that essentially how the brain does it? (Though I'm sure the way the brain adjusts its weights uses a different approximation of differentiation.)



Backprop requires global communication, but the biological brain works with local communication in neural assemblies, where signals spread in a way somewhat similar to diffusion.

There is no global “teaching signal” or “delta rule” error correction. Learning via reward and punishment is the wrong level of abstraction for fundamental cognitive tasks like visual object recognition or “parsing” an auditory signal.
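
To make the local-vs-global contrast concrete, here is a rough numpy sketch (my own illustration, not from anything cited here): a Hebbian-style update at a synapse uses only the pre- and post-synaptic activity available at that synapse, while the backprop update for the same weights needs the output error carried back through the downstream weights.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=3)          # presynaptic input
    W1 = rng.normal(size=(4, 3))    # hidden-layer weights
    W2 = rng.normal(size=(2, 4))    # output-layer weights
    h = np.tanh(W1 @ x)             # hidden activity
    y = W2 @ h                      # output
    target = np.array([1.0, 0.0])
    lr = 0.1

    # Local rule: each W1 synapse sees only its own pre/post activity.
    dW1_local = lr * np.outer(h, x)

    # Backprop: the W1 update needs the output error sent back through W2,
    # information that is not locally available at those synapses.
    err = y - target                      # dLoss/dy for a squared-error loss
    dh = (W2.T @ err) * (1 - h ** 2)      # chain rule through the tanh
    dW1_backprop = -lr * np.outer(dh, x)  # step that actually reduces the loss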

Sometimes people mention dopamine as a kind of reinforcement signal, but it operates on a completely different time scale, orders of magnitude slower than any iterative optimization model would require.

And the energy and time spent on iterative optimization in ANNs are simply not available to living organisms with constrained resources.

If you’re interested in an authoritative opinion on what kind of learning is biologically plausible, see e.g. Prof. Edmund Rolls’s recent book, “Brain Computations”.


Backprop is not simply about adjusting the weights (as a matter of fact, you can argue that any training method is about adjusting the weights).

It's about how you compute the amount by which you adjust each weight.

And unless there's been a major development in neuroscience that I'm not aware of, backprop is not the way the brain does it.
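
To spell out what "computing the amount" means: backprop evaluates dLoss/dw for each weight via the chain rule, and the weight update is just that gradient times a step size. A toy one-neuron sketch (my own example, easy to check against finite differences):

    import numpy as np

    def loss(w, x=0.5, target=1.0):
        y = np.tanh(w * x)        # one-neuron "network"
        return 0.5 * (y - target) ** 2

    w, x, target = 0.3, 0.5, 1.0
    y = np.tanh(w * x)

    # Backprop (chain rule): dL/dw = dL/dy * dy/dw
    grad = (y - target) * (1 - y ** 2) * x

    # Finite-difference estimate of the same quantity
    eps = 1e-6
    grad_fd = (loss(w + eps) - loss(w - eps)) / (2 * eps)
    print(grad, grad_fd)          # the two agree to ~1e-9

    w_new = w - 0.1 * grad        # the "adjustment" is the gradient scaled by a step size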


I think the main issues are:

- there's no "correct" training prediction for the brain to compare against

- there's no "final layer" to propagate an error back from





