
Study reveals how, when a synapse strengthens, its neighbors weaken - laurex
http://news.mit.edu/2018/mit-scientists-discover-fundamental-rule-of-brain-plasticity-0622
======
alexbeloi
My naive take on this is that it makes biological sense to keep the net
expected electrical impulses approximately the same before and after
strengthening. At the end of the day, the brain has energy constraints.

This can be (and is) done functionally in ANNs, but it serves a different end
(avoiding over-fitting) and doesn't reduce energy (compute) expenditure in
dense ANNs, since activations and non-activations are computed either way and
take the same number of cycles in a dense network.

I'd love to see more work on massive sparse networks, where you actually get
compute efficiency if you can reduce the number of activations without hurting
your optimization target.

~~~
im3w1l
And apart from speeding things up, it could also have a regularizing effect.

------
nabla9
In deep learning this is called local response normalization (LRN).

ConvNets use LRN, where the most active neurons inhibit neurons at the same
location in neighboring feature maps.
[http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf](http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf)
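
To make that concrete, here is a rough NumPy sketch of that cross-channel
normalization (the constants k=2, n=5, alpha=1e-4, beta=0.75 are the ones
reported in the AlexNet paper; the function name and array layout are just
for illustration):

    import numpy as np

    def local_response_norm(a, n=5, k=2.0, alpha=1e-4, beta=0.75):
        # a: activations with shape (channels, height, width).
        # Each channel is divided by a term that grows with the squared
        # activity of its n neighboring feature maps at the same spatial
        # location, so strongly active channels suppress their neighbors.
        C = a.shape[0]
        out = np.zeros_like(a, dtype=float)
        for i in range(C):
            lo, hi = max(0, i - n // 2), min(C, i + n // 2 + 1)
            denom = (k + alpha * np.sum(a[lo:hi] ** 2, axis=0)) ** beta
            out[i] = a[i] / denom
        return out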

~~~
adityab
Actually the new finding is similar to weight normalisation for individual
neurons. LRN would be similar to another thing called lateral inhibition.
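
If I read the analogy right, something like a per-neuron norm constraint would
be the ANN counterpart: strengthening one incoming weight forces the neuron's
other weights to shrink. A toy sketch (the function name and target norm are
purely illustrative, not anything from the paper):

    import numpy as np

    def renormalize_incoming(W, target_norm=1.0):
        # W[i, :] holds the incoming weights of neuron i. Rescaling each row
        # to a fixed norm after a weight update means that when one incoming
        # weight grows, that neuron's other weights shrink, keeping the
        # total synaptic "budget" per neuron roughly constant.
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        return target_norm * W / np.maximum(norms, 1e-12)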

------
jxub
The naive conclusion is to avoid learning in any form in order to keep all the
synapses in an optimal state. Rather disappointing.

~~~
drak0n1c
Regardless of one’s general level of intelligence, there is that age-old adage
of a trade-off between book smarts and street smarts, or any other niche
skill. Forgetting, and the resulting ignorance, is part of learning. Perhaps
that’s why people in modern societies feel unhappier than those in poorer,
less developed societies.

Still, any society has certain forms of learning with outsized benefits that
make them worthwhile.

~~~
azeirah
I remember reading a paper a little while ago that stated that experts are
better at their craft because they're better at ignoring unimportant and
irrelevant input.

------
zerostar07
... and a model predicting and explaining this a decade ago:
[https://www.ncbi.nlm.nih.gov/pubmed/18602704](https://www.ncbi.nlm.nih.gov/pubmed/18602704)

------
tunnuz
I wonder if progress in artificial neural network technology is going to be
correlated with discoveries about real neural networks.

~~~
amelius
I'm still wondering about the natural equivalent of back-propagation.

~~~
no_identd
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3673183/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3673183/)

Citations here:

[https://scholar.google.com/scholar?cites=6395717759743355511](https://scholar.google.com/scholar?cites=6395717759743355511)

~~~
p1esk
I think the term "backpropagation" has a different meaning in neurobiology.

------
raincom
Here is the paper:
[http://science.sciencemag.org/content/360/6395/1349](http://science.sciencemag.org/content/360/6395/1349)

------
manmal
So this is how, e.g., brainwashing works? Repeating a message strengthens the
new pathways until the old connections are too weak to fire?

~~~
leeeeech
Arguably, a _single_ message would only affect a single pathway.

~~~
manmal
AFAIK a single message causes a giant electrical and chemical thunderstorm in
the brain, not just one excited pathway?

~~~
leeeeech
You are right, of course. But because even a trivial _message_ has many
"layers" of "messages", according to semantic theory, it's often a single lie
inside a whole message packet.

------
pmalynin
So basically L2 regularization.

~~~
p1esk
Not really. This is local inhibition, and it's not clear if there's a penalty
on synapse growth.
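
For contrast, an L2 penalty shrinks every weight toward zero in proportion to
its own magnitude, with no competition between neighboring synapses. A rough
sketch of the familiar update (names and constants here are just
illustrative):

    def sgd_step_with_l2(w, grad, lr=0.1, weight_decay=1e-4):
        # Each weight decays toward zero in proportion to its own magnitude;
        # its neighbors' values never enter the update, unlike the local,
        # competitive effect described in the study.
        return w - lr * (grad + weight_decay * w)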

------
ridgeguy
How interesting.

This looks like a lateral inhibition effect running on biochemical means that
are persistent compared to the transient electrochemical phenomena of classic
neuronal lateral inhibition. The latter does lots of signal processing in
visual and other systems. [1]

[1]
[https://en.wikipedia.org/wiki/Lateral_inhibition](https://en.wikipedia.org/wiki/Lateral_inhibition)

------
audiolion
We are born with a ton of synapses as babies, and through a process called
pruning, many of those synapses (the unused branches) die off.

This seems pretty normal to me.

------
jchook
Really curious what the biology-centric AI folks have to say about this, e.g.
Jeff Hawkins.

~~~
rytill
Normalization of a layer’s outputs kind of does this already.
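
In that spirit, a toy sketch of normalizing a layer's outputs (illustrative
only, not Hawkins' model): once you divide by the layer's total activity, any
unit that becomes more active takes its extra share from the others.

    import numpy as np

    def normalize_layer_outputs(h, eps=1e-5):
        # Dividing by the layer's total activity means one unit's gain is
        # necessarily another unit's loss in the normalized output, loosely
        # analogous to neighbors weakening when one synapse strengthens.
        return h / (np.linalg.norm(h) + eps)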

------
p1esk
So it's similar to "lateral inhibition", but for synapses.

------
bsenftner
Isn't that kinda the point? It's called learning.

------
techsin101
Does this mean addiction can be completely cured?

------
agumonkey
Self-pruning mechanism?

