
Neurons that fire together, wire together, but how? - Anon84
http://dissociativediaries.com/neurons-that-fire-together-wire-together-ok-but-how/
======
dr_dshiv
Hebb actually talked about causation, not synchrony (firing together):

"When an axon of cell A is near enough to excite cell B and repeatedly or
persistently takes part in firing it, some growth process or metabolic change
takes place in one or both cells such that A ‘s efficiency, as one of the
cells firing B, is increased”

Synchrony is extremely important, particularly for the formation of cortical
columns and neural pruning. But in spike-timing-dependent plasticity, where
growth is potentiated if the presynaptic neuron fires _just before_ the
postsynaptic one, the connection is actually depressed if the upstream and
downstream neurons fire exactly synchronously. (There is a huge amount of
variation in this across the brain, though.)
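
A minimal sketch of that asymmetric STDP window, as toy Python (the time
constants and amplitudes are illustrative assumptions, not measured values):

    import numpy as np
    
    def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
        """Weight change for one pre/post spike pair, dt_ms = t_post - t_pre."""
        if dt_ms > 0:       # pre fires just before post: potentiation
            return a_plus * np.exp(-dt_ms / tau_ms)
        elif dt_ms < 0:     # post fires before pre: depression
            return -a_minus * np.exp(dt_ms / tau_ms)
        return 0.0          # exact synchrony: no potentiation in this toy window
    
    print(stdp_dw(5.0), stdp_dw(-5.0))   # ~ +0.0078, ~ -0.0093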

~~~
dr_dshiv
Note that there is also a mechanism for association between two _pre_ synaptic
neurons. Probabilistically, when those upstream neurons fire synchronously,
the downstream neuron is more likely to actually fire. When that occurs, the
postsynaptic neuron will, as a result of Hebb's postulate, increase its
connectivity to the synchronously firing neurons. So, "cells that fire
together wire together" is more true of presynaptic neurons than of
pre-to-postsynaptic pairs (and the wiring together occurs through the
postsynaptic neuron).
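
A toy simulation of that mechanism (all numbers made up for illustration):
two upstream inputs that tend to fire on the same trials drive the downstream
cell over threshold more often, so a plain Hebbian rule ends up strengthening
both of their synapses together, while an uncorrelated input lags behind.

    import numpy as np
    
    rng = np.random.default_rng(0)
    w = np.array([1.0, 1.0, 1.0])        # synapses from inputs A, B, C
    eta, threshold = 0.001, 1.5
    
    for _ in range(2000):
        a = rng.random() < 0.5
        b = a if rng.random() < 0.8 else rng.random() < 0.5   # B mostly fires with A
        c = rng.random() < 0.5                                 # C is independent
        pre = np.array([a, b, c], float)
        post = float(pre @ w >= threshold)   # downstream neuron fires on strong drive
        w += eta * pre * post                # Hebbian: strengthen co-active synapses
    
    print(w)   # A and B end up noticeably stronger than C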

------
vinay427
I find it strange that the author couldn't find this in a textbook. This is
rather common material in a developmental neuroscience textbook or lecture.
I looked through two such books during my (only) course on the topic, and
both of them, as well as the lectures, covered this material.

~~~
ramraj07
What did they say? Perhaps you misread what was in the textbooks? My
understanding is that the author's questions are legitimate and still not
fully answered.

~~~
vinay427
Yep, I agree that there are legitimate questions raised which still lack
answers. However, I'm responding to the claim that the information the author
provides in the summary is not found in any textbook they saw:

> So there you have it, a quick summary of one part of neural connectivity
> I’ve yet to see described in a textbook about the brain, but which really
> should be given out there, along with the classic Hebbian principle

------
throwitawayday
The question of how neurons find each other to connect was recently studied
with experimental connectomics--altering neurons and then mapping their
synaptic circuits with electron microscopy--in this paper by Javier Valdes
Aleman et al. 2019
[https://www.biorxiv.org/content/10.1101/697763v1](https://www.biorxiv.org/content/10.1101/697763v1)
, using Drosophila's somatosensory axons and central interneurons as a model.

If Disqus worked on the OP's website (I can never get the "post" button for
comments to work after logging in), the above could have gone straight onto the page.

------
g_airborne
The connectedness of neurons in neural nets is usually fixed from the start
(i.e. between layers, or somewhat more complicated in the case of CNNs, etc.).
If we could eliminate this and let neurons "grow" towards each other (like this
article shows), would that enable smaller networks with similar accuracy?
There's some ongoing research into pruning weights by finding "subnets" [1],
but I haven't found any method yet where the network grows connections itself.
The only counterpoint I can come up with is that it probably wouldn't give a
significant speedup, because it defeats the use of SIMD/matrix operations on
GPUs. Maybe we would need chips that are designed differently to speed up
these self-growing networks?

I'm not an expert on this subject; does anybody have any insights on this?

1\. [https://www.technologyreview.com/2019/05/10/135426/a-new-way-to-build-tiny-neural-networks-could-create-powerful-ai-on-your-phone/](https://www.technologyreview.com/2019/05/10/135426/a-new-way-to-build-tiny-neural-networks-could-create-powerful-ai-on-your-phone/)
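
A rough sketch of the contrast (hypothetical toy code, not the method from
[1]): today's dense layers encode connectivity as a full weight matrix, and
pruning typically just masks entries of it, which keeps the fast fixed-shape
matmul but never adds new connections.

    import numpy as np
    
    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 256))      # dense layer: every input-output pair is "connected"
    
    # Magnitude pruning: keep only the largest 10% of weights, zero out the rest.
    mask = np.abs(w) >= np.quantile(np.abs(w), 0.9)
    w_pruned = w * mask                  # still a dense matrix, just with zeros in it
    
    x = rng.normal(size=256)
    y = np.maximum(w_pruned @ x, 0.0)    # forward pass is still one matmul + ReLU
    
    # A "growing" network would instead add nonzero entries to `mask` over time,
    # which breaks the fixed-shape matmul that makes GPUs fast at this.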

~~~
blamestross
(See the sibling comment; NEAT is awesome.)

The only reason we architect ANNs the way we do is optimization of
computation. The bipartite graph structure is optimized for GPU matrix math.
Systems like NEAT have not been used at scale because they are a lot more
expensive both to train and to run once trained. ASICs and FPGAs have a
chance of utilizing a NEAT-generated network in production, but we still
don't have a computer well suited to training a NEAT network.
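
A sketch of the difference (toy code, assuming a NEAT-style feed-forward
genome): an evolved topology is an arbitrary DAG, so evaluating it is a
data-dependent loop over nodes in topological order rather than one matrix
multiply per layer.

    import math
    
    # Hypothetical evolved genome: arbitrary connections, no layer structure.
    # Nodes 0-1 are inputs, 2-3 are hidden, 4 is the output.
    connections = [(0, 2, 0.7), (1, 2, -1.2), (0, 3, 0.4),
                   (2, 3, 1.1), (2, 4, 0.9), (3, 4, -0.5)]   # (src, dst, weight)
    topo_order = [2, 3, 4]                                    # non-input nodes, sorted
    
    def evaluate(inputs):
        acts = {0: inputs[0], 1: inputs[1]}
        for node in topo_order:                               # sequential, node by node
            total = sum(w * acts[src] for src, dst, w in connections if dst == node)
            acts[node] = math.tanh(total)
        return acts[4]
    
    print(evaluate([1.0, 0.5]))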

~~~
Der_Einzige
NEAT just doesn't have good, modern, GPU-powered implementations.

NEAT would totally be competitive if someone actually got a version running
in PyTorch/TensorFlow.

~~~
jawarner
You may be interested in this implementation [1] which builds the networks
using PyTorch.

[1] [https://github.com/uber-research/PyTorch-NEAT](https://github.com/uber-research/PyTorch-NEAT)

~~~
blamestross
It uses PyTorch (and I'm probably going to use it), but it doesn't effectively
leverage a GPU for training.

~~~
jawarner
What do you think is the best way to accomplish this?

~~~
blamestross
You don't. You need a different parallelism model than a GPU provides. It
could work well on machines with very high CPU count, but the speedup on GPUs
is the main reason bipartite graph algorithms have seen such investment.

------
hirenj
Funnily enough over this last weekend, I read a great review on this subject
from earlier this year:

“Synaptic Specificity, Recognition Molecules, and Assembly of Neural Circuits”
by Sanes and Zipursky

[https://doi.org/10.1016/j.cell.2020.04.008](https://doi.org/10.1016/j.cell.2020.04.008)

For me, the hard part has always been understanding how this whole thing is
orchestrated on a cellular and molecular level.

------
dr_dshiv
When Hebb talks about "reverberation" in neural circuits, he is thinking ahead
even of our current knowledge of oscillatory neurodynamics. Here he speculates
about the short-term memory trace that is maintained dynamically, prior to
physical changes in the synapse:

"It might be supposed that the mnemonic trace is a lasting pattern of
reverberatory activity without fixed locus like a cloud formation or eddies in
a millpond"

From Hebb's 1949 "The Organization of Behavior"

------
punnerud
Main point: “(..) if the target neuron already has too many connections, it
will tend to remove the weakest ones, and this includes the most recent ones.
The scaling goes both ways after all – it goes for more synapses when it
starts with too few, but for less, if it starts with too many.

But synaptic scaling is not everything. As it turns out, the tips of the
growth cone constantly produce structures called filopodia, and these react to
specific chemical attractants and repellents. These chemicals are produced by
both cells at the target area, and by so-called guidepost cells along the way.
There are suggestions that the system for such targeting is fairly robust,
especially in early development (and its limitations in later life might
explain why spinal cord injuries and the like are so hard to fix).”

------
Mirioron
This makes me want to know how quickly this type of growth happens. Is it on
the order of seconds? Minutes? Hours? Days? Is this why, when you learn
something, take a break, and come back later, everything makes more sense?

~~~
jcims
I've noticed that there's a strange growth curve when learning a physical
skill. You suck at first, then quickly reach some kind of milestone, then get
worse before you get better. It feels like my brain is attempting to delegate
some of the motor activity to lower levels before they are 'ready', but in
fact that might be an essential part of training those neurons.

~~~
mikhailfranco
See _Mastery_ by George Leonard, which is a great book and highly recommended
even if you are not into karate or martial arts.

You have echoed his sketch of punctuated plateaus (p. 14):

 _The Mastery Curve_

      There's really no way around it. Learning any new skill
      involves relatively brief spurts of progress, each of
      which is followed by a slight decline to a plateau
      somewhat higher in most cases than that which preceded it.

[pdf] [http://index-of.co.uk/Social-Interactions/Mastery%20-%20The%20Keys%20To%20Success%20And%20Long-Term%20Fulfillment%20-%20George%20Leonard.pdf](http://index-of.co.uk/Social-Interactions/Mastery%20-%20The%20Keys%20To%20Success%20And%20Long-Term%20Fulfillment%20-%20George%20Leonard.pdf)

~~~
jcims
Will definitely check this out, thank you.

------
buboard
Growth cones are only relevant during development and in regenerating neurons,
which are not common. Everyday neurons do, however, continuously extend (and
retract) filopodia, which may reach nearby axon terminals and eventually form
a synapse, thus causing synaptic rewiring. "Synaptic scaling" usually refers
to a homeostatic, uniform up- or down-scaling of synaptic weights and is not
really relevant to rewiring.

~~~
dr_dshiv
What about hippocampal neurogenesis? Those are spewing out at a nearly
constant rate all the time

~~~
buboard
It's still a tiny number of neurons being turned over. About 1.75% of the
dentate gyrus is renewed per year.

~~~
dr_dshiv
But that's the piano roll that is recording our sense of time!

------
greyface-
The Neuronal Gene Arc Encodes a Repurposed Retrotransposon Gag Protein that
Mediates Intercellular RNA Transfer
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5884693/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5884693/)

------
sesuximo
How do neurons fire less?

------
plutonorm
Memory RNA

------
mongojunction
What if something about the electrical signal attracts growth in a certain
direction, toward other signals firing at the same time?

Or what if some neuron pairs that are not yet connected share quantum-entangled
structures that, if activated simultaneously ... but still, how does direction
occur?

What if neurons emit light (that's why you can stimulate them with light), and
what if they can somehow detect the faint light from other neurons, sense the
direction it comes from, and grow towards that?

~~~
dr_dshiv
You are getting downvoted for speculating on Quantum entanglement, but I think
all of your speculations are useful here and to be encouraged.

~~~
mongojunction
Thank you. So you work in this area? What do you think about the software
available for your research? Could it be better, or does software not play
much of a role?

~~~
dr_dshiv
Yes the software in this area is critical -- and there are major challenges.

~~~
mongojunction
If it's not too much trouble could you point me to some resources,
compilations of relevant software? Maybe I could add some value...

------
mikhailfranco
Hebbian Learning is just applying the function:

 _enhance transitive closure on a temporal window_

plus the dual negation, whatever that is

under the space-time corollary of De Morgan's Laws:

 _atrophy atemporal uncorrelated direct connection_

