In extremely simple organisms like roundworms, neuron counts are on the order of hundreds (C. elegans has 302); for most insects you're in the 10k-1M range.
A honeybee has about one million neurons - computational devices that we have a hard time fully and accurately mapping - and something like a billion connections between them.
Each of those neurons contains the entire genome for that honeybee, around 250 million base pairs. Those code for the thousands of proteins that make up a honeybee - proteins are sequences of amino acids which arrange themselves into shapes with different molecular interaction properties. Figuring out that shape from the amino acid sequence alone is so computationally difficult that it spawned the Folding@Home project, one of the largest collections of computing power in the world.
The process of translating from DNA through RNA to a protein is itself substantially harder than it sounds - spend time with a bioinformatics textbook at some point to see some of the features of DNA, such as non-coding regions (introns) in the middle of sequences that describe proteins, or sections of RNA which themselves fold into functional forms.
None of this is even getting down to the molecular level, where the geometry of the folded proteins allows them to accelerate reactions by millions or trillions of times, allowing processes which would normally operate at geological scales to be usable for something with the lifespan of a bacterium.
The most complex systems we've ever devised pale in comparison to even basic biological systems. You need to start to look at macro-scale systems like the internet or global shipping networks before you start to see things that approximate the level of complexity of what you can grow in your garden.
Nature builds things; we're playing with toys.
Does biology have an equivalent? Or is your comment essentially explaining that it probably does, but is more complex and we don't understand it yet?
Evolution is opportunistic and will reach for whatever computational method might be available and easy to fall into.
A great book on approaching AI by first understanding the human brain...
At its core, what you're seeing in all of these steps are molecular interactions - neurons fire to the rate they do because they build up sodium, potassium, or calcium ions; different chemical signaling chains are what prompt the transcription of given genes; charge affinities between the amino acids and with their environment are what create the shapes of proteins and give them their properties and capabilities. Effectively, each atom - each electron on each atom - is affecting these interactions, and that's what's driving the whole system.
At the molecular level, the properties of a molecule are (effectively) determined by their structure and charge distribution - the atomic composition of the molecule, where are the electrons likely to congregate, which bonds are stronger or weaker, and where are atoms likely to be able to be added or removed. These affect how the molecule reacts with other molecules, and each reaction that changes either the structure or the charge distribution changes how the molecule will react going forward.
So, the computation model is effectively a physical/structural one - how do these structures meet, compare, and combine, played out over trillions of interactions and interaction chains.
(I'm consciously ignoring the quantum side, because A) I don't understand it well enough and B) the structure/charge lens seems to be basically sufficient.)
It's worth taking a look at some of the cell chemical pathways to see how some of this plays out - take a look at things like the Krebs cycle, which are basically steps of interactions in which a molecule or several are modified step by step in a series of splits, joins, adds, and subtractions that allow for the next step.
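As a loose way to picture that step-by-step structure, here's a toy sketch in Python - not real chemistry, and the function names and carbon bookkeeping are purely illustrative - showing a pathway as a chain of small modifications, each one setting up the next:

```python
# Toy abstraction of a pathway like the Krebs cycle: each step takes the
# current molecule (modeled here as just a name and a carbon count) and
# applies one modification. Real steps are enzyme-catalyzed and far
# richer than "add/subtract" - this only illustrates the chaining.

def join_acetyl(m):      # citrate-synthase-like step: add a 2-carbon unit
    return {"name": "citrate", "carbons": m["carbons"] + 2}

def decarboxylate(m):    # release a CO2: drop one carbon
    return {"name": m["name"] + "-CO2", "carbons": m["carbons"] - 1}

pathway = [join_acetyl, decarboxylate, decarboxylate]

molecule = {"name": "oxaloacetate", "carbons": 4}
for step in pathway:
    molecule = step(molecule)

# 4 + 2 - 1 - 1 = 4 carbons: the cycle regenerates a 4-carbon acceptor
```

The real cycle does roughly this carbon accounting (a 2-carbon acetyl group joins a 4-carbon acceptor, two CO2 molecules leave), but each arrow hides an enzyme whose behavior comes from exactly the structure-and-charge story above.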
Part of what makes this tricky is that, while you can "zoom out" and focus on larger systems like neurons or genomes, the molecular interaction model shows up all over the place - neurons fire to the degree they do because of charge accumulation, DNA & RNA transcription are strongly affected by Weird Molecular Interactions, protein folding is at least partially a product of charge affinities, enzymes work as they do because of structure and charges. This is why a lot of these problems are enormously computationally difficult - it's a physical system, not a logical one - there's no way to isolate any layer from any other layer.
(I deeply welcome corrections on any of the above, by the way - I've spent time reading on all of this, but I'm not a professional by any stretch. The above is the model I've acquired over time, not reality.)
The firing of an action potential, on the other hand, only happens when they become depolarized enough to reach their threshold potential. If they reach the threshold (generally due to ligand-receptor binding) then voltage gated sodium channels open up and the neuron gets flooded with positive charge - this is the electrical impulse that moves down the axon.
To be honest I'm not someone who knows much about computer science, but the closest thing to a boolean-type operator that comes to mind is the threshold potential. It's an all-or-nothing process - either T or F - and if it's true, an action potential is generated.
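That all-or-nothing behavior can be sketched with a toy leaky integrate-and-fire model - all constants here are illustrative, not measured values:

```python
# Toy leaky integrate-and-fire neuron: the membrane potential integrates
# incoming charge and decays toward rest; firing is all-or-nothing once
# the threshold potential is crossed. Constants are illustrative only.

REST = -70.0       # resting potential, mV
THRESHOLD = -55.0  # threshold potential, mV
LEAK = 0.9         # fraction of above-rest charge retained per step

def simulate(inputs, v=REST):
    spikes = []
    for current in inputs:
        v = REST + LEAK * (v - REST) + current  # integrate + leak
        if v >= THRESHOLD:                      # the boolean: fire or not
            spikes.append(True)
            v = REST                            # reset after the spike
        else:
            spikes.append(False)
    return spikes

# small inputs leak away without firing; a big one crosses threshold
print(simulate([5, 5, 5, 0, 20]))  # [False, False, False, False, True]
```

The analog part (graded charge accumulation) and the digital part (the threshold) live in the same unit, which is part of why neurons resist clean boolean descriptions.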
And somehow be able to polymorph into something like a human.
I think it's more impressive that it doesn't. Quite a bit is emergent from stochasticity; the encoded parts are just how to guide the stochasticity correctly.
An analogy is building a Sierpinski triangle from die rolls and one instruction.
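That analogy is the "chaos game," and it's short enough to sketch directly - the corner coordinates and point counts here are just one common setup:

```python
import random

# The "chaos game": start anywhere, repeatedly roll to pick one of three
# triangle corners, and jump halfway toward it. The one instruction plus
# randomness makes the points settle onto the Sierpinski triangle.

CORNERS = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

def chaos_game(n, seed=0):
    rng = random.Random(seed)
    x, y = 0.3, 0.3                      # arbitrary starting point
    points = []
    for _ in range(n):
        cx, cy = rng.choice(CORNERS)     # the die roll
        x, y = (x + cx) / 2, (y + cy) / 2  # the single instruction
        points.append((x, y))
    return points

pts = chaos_game(10_000)
```

Plot `pts` and the fractal appears; none of its structure is written down anywhere in the rule, which is the point about encoded guidance versus emergent detail.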
But a human is one SINGLE program, running in distributed fashion across about 30 TRILLION cells, that makes us operate functionally as one single entity. That is what's impressive. What's more, the organization of those cells is also handled by the program.
Of course, the debugging process has not been a nice one.
When people talk about "Epigenetic Memories," they're using a bit of a romantic term for changes in gene expression that last beyond a single generation - for instance, a mother experiences a drought or famine that causes changes that can be detected in her offspring.
I caution myself not to be romantic about this topic, but at the same time neural development is not a random arrangement of cells, and arguably the brain has predispositions toward behaviors. There is a possibility that some genetic mutations within a single generation are not strictly random - that the machinery producing them evolved over millions of years to introduce changes in response to environmental pressure. (This is only my speculation, of course.)
Actually sounds quite significant ;)
I've written a SIMD-based simulator that can integrate 500k of those neurons in real-time on a standard laptop. That's a bit of a limit case -- it doesn't yet model action potentials or learning, which would slow it down quite a bit. The details here matter a lot: a GPU version I experimented with could do about 1M, whereas an earlier version of this work that was a straight-up CPU-based solution (_not_ SIMD) but _with_ action potentials and learning could only model 1k.
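This isn't the parent's simulator, but the core vectorization trick can be sketched with NumPy (which plays the same game as hand-written SIMD, just at array granularity); every name and constant below is illustrative:

```python
import numpy as np

# Sketch of why vectorized integration is fast: updating N rate-model
# neurons is one fused elementwise array expression rather than a
# Python-level loop - the same idea SIMD applies in CPU registers.
# No action potentials or learning here, matching the "limit case."

N = 500_000
rng = np.random.default_rng(0)
W_in = rng.standard_normal(N) * 0.1   # illustrative input weights
rates = np.zeros(N)                   # firing rates for all N neurons
TAU = 0.9                             # per-step decay constant

def step(rates, drive):
    # leaky integration toward rectified driven input, across all N at once
    return TAU * rates + (1 - TAU) * np.maximum(0.0, W_in * drive)

for _ in range(10):
    rates = step(rates, drive=1.0)
```

Adding spike generation and plasticity turns each step into state-dependent branching work per neuron, which is consistent with the large slowdowns described above.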
I’m not surprised this does well on MNIST and I’m not sure it breaks with present research directions in deep learning. This network could be built pretty easily in torch or tensorflow.
There is a lot of research potential waiting to be discovered (or maybe rediscovered) with sparsity in neural networks, so I am a fan of this area of research. I would even say that sparsity is playing a much bigger role than we realize in the success of many neural architectures that are popular today.
As I read it, this is an example of people publishing a minor tweak in the algorithm and then claiming it’s a new biologically inspired abstraction. I guess I just want more papers about transformers.
I mean, it does in the sense that local pixels are strongly correlated and a binary tree will capture this. In fact, if you add weight-sharing to the K-tree model you can recover 1D convolution with a stride and kernel of 2.
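That equivalence is easy to check numerically - this is my own sketch with illustrative names and shapes, not the paper's code:

```python
import numpy as np

# Check: a binary-tree layer with shared (left, right) child weights is
# exactly a 1D convolution with kernel size 2 and stride 2.

def tree_layer(x, w_left, w_right):
    # combine adjacent pairs: parent_i = wl * x[2i] + wr * x[2i+1]
    return w_left * x[0::2] + w_right * x[1::2]

def conv1d_k2_s2(x, kernel):
    # direct strided convolution, kernel size 2, stride 2
    return np.array([kernel[0] * x[i] + kernel[1] * x[i + 1]
                     for i in range(0, len(x) - 1, 2)])

x = np.arange(8, dtype=float)
wl, wr = 0.3, -1.2
assert np.allclose(tree_layer(x, wl, wr), conv1d_k2_s2(x, (wl, wr)))
```

Without weight-sharing, each pair gets its own (wl, wr), which is the tree model as described rather than a convolution - so the tweak really is one weight-tying constraint away from a standard op.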
But is this really the right operation for images? Why fixed kernel of 2? I think capsules or some other vector-based operation would make more sense. Perhaps with a learned or dynamic connectivity pattern.
The basic mechanic of using wings to generate lift might have been inspired by birds in flight, but a 737 is not a simplified pigeon.
In every system we create that mimics a biological one (whether communication, transport, robotics, computation, chemistry, etc.), the result is, in relative terms, extremely simplistic, clunky, and inefficient. Our systems may look highly complex to us, but in reality they are very, very simple in comparison to the original systems we want to mimic. At this point we lack the finesse to design anything comparable. We have a long, long way to go yet.