"When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A ‘s efficiency, as one of the cells firing B, is increased”
Synchrony is extremely important, particularly for the formation of cortical columns and neural pruning.
But in spike-timing-dependent plasticity, where the connection is potentiated if the presynaptic neuron fires just before the postsynaptic one, the connection is actually depressed if the upstream and downstream neurons fire exactly synchronously. (There is a huge amount of variation in this across the brain, though.)
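For the curious, here is a minimal sketch of the classic pair-based STDP rule (my own toy illustration, not from the article). The amplitudes and time constants are made-up placeholders, and treating exact synchrony as depression is an arbitrary convention here; as noted, real synapses vary a lot.

```python
import numpy as np

# Toy pair-based STDP rule (illustrative constants, not biological fits).
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes (assumed)
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms (assumed)

def stdp_dw(dt_ms: float) -> float:
    """Weight change for one pre/post spike pair, dt = t_post - t_pre."""
    if dt_ms > 0:       # pre fired just before post -> potentiate
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)
    if dt_ms < 0:       # post fired before pre -> depress
        return -A_MINUS * np.exp(dt_ms / TAU_MINUS)
    return -A_MINUS     # exact synchrony: treated as depression here (assumed)

print(stdp_dw(5.0))    # pre leads post by 5 ms -> small potentiation
print(stdp_dw(0.0))    # exactly synchronous -> depression under this convention
```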
> So there you have it, a quick summary of one part of neural connectivity I’ve yet to see described in a textbook about the brain, but which really should be given out there, along with the classic Hebbian principle
If the Disqus comments on the OP's website worked (I can never get the "post" button to appear after logging in), the above could have gone straight onto the page.
I'm not an expert on this subject; does anybody have any insights on this?
My conclusion was that I could easily set >99% of the weights in my (fully connected) layers to zero with minimal performance impact after enough training. But training time went up a lot (effectively, after removing a bunch of connections, you have to do more training before you can remove more), and inference speed wasn't really improved because sparse matrices are sloooow.
Overall, while it works out for biology, I don't think it will work for silicon.
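For anyone wanting to try something like this, here is a rough sketch of the prune-then-retrain loop in PyTorch. It is my reconstruction of the general technique (iterative magnitude pruning), not the parent commenter's actual code; `model`, `train_epoch`, and the sparsity schedule are placeholder assumptions.

```python
import torch
import torch.nn as nn

def magnitude_prune_(layer: nn.Linear, sparsity: float) -> torch.Tensor:
    """Zero the smallest-magnitude weights of a layer; return the keep-mask."""
    w = layer.weight.data
    k = int(w.numel() * sparsity)                       # how many weights to drop
    threshold = w.abs().flatten().kthvalue(k).values if k > 0 else -1.0
    mask = (w.abs() > threshold).float()
    w.mul_(mask)
    return mask

def iterative_prune(model: nn.Module, train_epoch, steps: int = 10,
                    final_sparsity: float = 0.99) -> nn.Module:
    """Prune a bit, retrain, re-apply masks, repeat -- ramping up sparsity."""
    masks = {}
    for step in range(1, steps + 1):
        sparsity = final_sparsity * step / steps        # gradually approach 99%
        for name, layer in model.named_modules():
            if isinstance(layer, nn.Linear):
                masks[name] = magnitude_prune_(layer, sparsity)
        train_epoch(model)          # the extra retraining the comment mentions
        for name, layer in model.named_modules():
            if isinstance(layer, nn.Linear):
                layer.weight.data.mul_(masks[name])     # keep pruned weights at zero
    return model
```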
It is based on NEAT (as other commenters mentioned) and also ties in some discussion of the Lottery Ticket Hypothesis, as you mentioned.
The only reason we architect ANNs the way we do is to optimize computation. The bipartite graph structure is optimized for GPU matrix math. Systems like NEAT have not been used at scale because they are a lot more expensive both to train and to run inference with. ASICs and FPGAs have a chance of running a NEAT-generated network in production, but we still don't have a computer well suited to training a NEAT network.
NEAT would totally be competitive if someone actually got a version running in PyTorch/TensorFlow.
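To illustrate why this is hard, here is a toy contrast (my own sketch, not an existing NEAT-on-PyTorch implementation): a dense layer collapses into a single batched matmul, while an arbitrary NEAT-style genome has to be walked node by node in topological order, which GPUs hate. The genome encoding below is a simplified assumption, not NEAT's actual data structures.

```python
import math
import torch

# Dense (bipartite) layer: the whole thing is one batched matmul on the GPU.
x = torch.randn(32, 128)           # batch of 32 inputs
W = torch.randn(128, 64)
dense_out = torch.tanh(x @ W)      # single, highly parallel operation

# NEAT-style genome: arbitrary connections, assumed already in topological order.
# node_id -> list of (source_node_id, weight); nodes 0 and 1 are the inputs.
genome = {
    2: [(0, 0.5), (1, -1.2)],      # hidden node fed by both inputs
    3: [(0, 0.7), (2, 2.0)],       # output fed by input 0 and hidden node 2
}

def eval_genome(inputs, genome):
    values = dict(enumerate(inputs))               # node_id -> activation
    for node, incoming in genome.items():          # sequential, data-dependent loop
        values[node] = math.tanh(sum(values[src] * w for src, w in incoming))
    return values

print(eval_genome([1.0, -0.5], genome))
```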
“Synaptic Specificity, Recognition Molecules, and Assembly of Neural Circuits” by Sanes and Zipursky
For me, the hard part has always been understanding how this whole thing is orchestrated at the cellular and molecular level.
"It might be supposed that the mnemonic trace is a lasting pattern of reverberatory activity without fixed locus like a cloud formation or eddies in a millpond"
From Hebb's 1949 "The Organization of Behavior"
But synaptic scaling is not everything. As it turns out, the tips of the growth cone constantly produce structures called filopodia, and these react to specific chemical attractants and repellents. These chemicals are produced both by cells at the target area and by so-called guidepost cells along the way. There are suggestions that the system for such targeting is fairly robust, especially in early development (and its limitations in later life might explain why spinal cord injuries and the like are so hard to fix).
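Just to make the guidance idea concrete, here is a toy caricature in Python (my own illustration, not a model from the article): a "growth cone" probes a few directions, like filopodia, and steps toward whichever direction scores best on an assumed attractant field minus a repellent field.

```python
import numpy as np

TARGET = np.array([1.0, 1.0])      # assumed location of the target cell
OBSTACLE = np.array([0.5, 0.0])    # assumed source of a repellent

def attraction(p):                 # higher the closer we are to the target
    return -np.sum((p - TARGET) ** 2)

def repulsion(p):                  # sharp penalty near the repellent source
    return np.exp(-np.sum((p - OBSTACLE) ** 2) / 0.1)

def grow(p, steps=200, step_size=0.02):
    """Greedy growth: probe four directions (filopodia-style) and take the best."""
    probes = [np.array(d) for d in ([1, 0], [-1, 0], [0, 1], [0, -1])]
    for _ in range(steps):
        candidates = [p + step_size * d for d in probes]
        p = max(candidates, key=lambda q: attraction(q) - repulsion(q))
    return p

print(grow(np.array([0.0, 0.0])))  # ends up near TARGET, curving around OBSTACLE
```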
which is a great book and highly recommended, even if you are not into karate or martial arts.
You have echoed his sketch of punctuated plateaus (p14):
> The Mastery Curve
>
> There's really no way around it. Learning any new skill involves relatively brief spurts of progress, each of which is followed by a slight decline to a plateau somewhat higher in most cases than that which preceded it.
* You never enter the same room twice.
* Your brain partially rewires every time you sleep.
* Your brain rewires, but the way it rewires is surprisingly predictable and we can track the dynamics.
* Your brain is rewiring literally every second, but not every rewiring is functional; does this imply an implicit robustness?
Or what if some neuron pairs that are not yet connected share quantum-entangled structures that, if activated simultaneously ... but still, how does direction occur?
What if neurons emit light (maybe that's why you can stimulate them with light)... and what if they can somehow detect the faint light from other neurons, sense the direction it comes from, and grow towards it?
Enhance transitive closure on a temporal window; plus the dual negation, whatever that is under the space-time corollary of De Morgan's Laws: atrophy atemporal, uncorrelated direct connections.
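Taking that more literally than it probably deserves, here is a toy version of the rule as I read it: strengthen a shortcut A→C whenever some B relays spikes A→B→C inside a time window, and atrophy direct connections whose endpoints have not fired in a correlated way. All names and constants below are my own assumptions, not anything from the article.

```python
import numpy as np

def update(W, t, window=20.0, lr=0.05, decay=0.02):
    """One pass of the (interpreted) rule over weight matrix W and spike times t."""
    n = len(t)
    for a in range(n):
        for c in range(n):
            if a == c:
                continue
            # transitive enhancement: some b relays a -> b -> c inside the window
            relayed = any(
                W[a, b] > 0 and W[b, c] > 0
                and 0 < t[b] - t[a] < window
                and 0 < t[c] - t[b] < window
                for b in range(n)
            )
            if relayed:
                W[a, c] += lr
            # the "dual": atrophy direct connections with uncorrelated timing
            elif W[a, c] > 0 and abs(t[c] - t[a]) > window:
                W[a, c] = max(0.0, W[a, c] - decay)
    return W

W = np.zeros((3, 3)); W[0, 1] = W[1, 2] = 1.0   # existing chain 0 -> 1 -> 2
t = np.array([0.0, 5.0, 12.0])                  # recent spike times in ms
print(update(W, t)[0, 2])                       # the shortcut 0 -> 2 gets strengthened
```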