
The Computational Power of Biological Dendritic Trees - lamename
https://arxiv.org/abs/2009.01269
======
roughly
The amount of computational power in biological systems is simply staggering.

In extremely simple organisms like roundworms, there are on the order of
hundreds of neurons; for most insects you're in the 10k-1M range.

A honeybee contains about one million neurons - computational devices that we
have a hard time fully and accurately mapping - and something like a billion
connections between them.

Each of those neurons contains the entire genome for that honeybee, around 250
million base pairs. Those code for the thousands of proteins that make up a
honeybee - proteins are made up of sequences of amino acids which arrange
themselves into shapes with different molecular interaction properties.
Figuring out that shape given the amino acid sequence is so computationally
difficult that it spawned the Folding@Home project, which is one of the
largest collections of computing power in the world.

The process of translating from DNA through RNA to a protein is itself
substantially harder than it sounds - spend time with a bioinformatics
textbook at some point to see some of the features of DNA, such as non-coding
regions in the middle of sequences that describe proteins, or sections of RNA
which themselves fold into functional forms.

None of this is even getting down to the molecular level, where the geometry
of the folded proteins allows them to accelerate reactions by millions or
trillions of times, allowing processes which would normally operate at
geological scales to be usable for something with the lifespan of a bacterium.

The most complex systems we've ever devised pale in comparison to even basic
biological systems. You need to start to look at macro-scale systems like the
internet or global shipping networks before you start to see things that
approximate the level of complexity of what you can grow in your garden.

Nature builds things; we're playing with toys.

~~~
SkyMarshal
Do we understand what the fundamental computational operation is in biology
yet? Eg, in computers it's boolean logic, transistors with on/off gates,
representing true/false or 1/0, from which all other more complex logical
operations can be composed.

Does biology have an equivalent? Or is your comment essentially explaining
that it probably does, but is more complex and we don't understand it yet?

~~~
roughly
I'm the Wrong Person to Answer This - I'm a hobbyist and a dilettante, not a
scientist, but here's how I understand it:

At its core, what you're seeing in all of these steps are molecular
interactions - neurons fire at the rate they do because they build up sodium,
potassium, or calcium ions; different chemical signaling chains are what
prompt the transcription of given genes; charge affinities between the amino
acids and with their environment are what create the shapes of proteins and
give them their properties and capabilities. Effectively, each atom - each
electron on each atom - is affecting these interactions, and that's what's
driving the whole system.

At the molecular level, the properties of a molecule are (effectively)
determined by its structure and charge distribution - the atomic composition
of the molecule, where electrons are likely to congregate, which bonds are
stronger or weaker, and where atoms can be added or removed. These affect how
the molecule reacts with other molecules, and each reaction that changes
either the structure or the charge distribution changes how the molecule will
react going forward.

So, the computational model is effectively a physical/structural one - how do
these structures meet, compare, and combine, played out over trillions of
interactions and interaction chains.

(I'm consciously ignoring the quantum side, because A) I don't understand it
well enough and B) the structure/charge lens seems to be basically
sufficient.)

It's worth taking a look at some of the cell chemical pathways to see how some
of this plays out - take a look at things like the Krebs cycle, which is
basically a sequence of interactions in which a molecule or several are
modified step by step in a series of splits, joins, additions, and
subtractions that allow for the next step.

Part of what makes this tricky is that, while you can "zoom out" and focus on
larger systems like neurons or genomes, the molecular interaction model shows
up all over the place - neurons fire the way they do because of charge
accumulation; DNA transcription and RNA translation are strongly affected by
Weird Molecular Interactions; protein folding is at least partially a product
of charge affinities; enzymes work as they do because of structure and charge.
This is why a lot of these problems are enormously computationally difficult:
it's a physical system, not a logical one, and there's no way to isolate any
layer from any other layer.

(I deeply welcome corrections on any of the above, by the way - I've spent
time reading on all of this, but I'm not a professional by any stretch. The
above is the model I've acquired over time, not reality.)

~~~
SenorTibbs
Not as a correction, but more of an into-the-weeds clarification: neurons at
rest have a pretty stable cytosolic ionic composition; they hover around a
-80mV resting potential due to leak channels.

The firing of an action potential, on the other hand, only happens when they
become depolarized enough to reach their threshold potential. If they reach
the threshold (generally due to ligand-receptor binding), then voltage-gated
sodium channels open up and the neuron gets flooded with positive charge -
this is the electrical impulse that moves down the axon.

To be honest, I'm not someone who knows much about computer science, but in
terms of a boolean-type operator the closest thing that comes to mind is the
threshold potential? It's an all-or-nothing process, either T or F, and if
it's true then an action potential is generated.
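
In code terms, that all-or-nothing step might look something like this toy
"integrate and leak until threshold" sketch (a caricature; the numbers are
illustrative, not physiological):

    # Toy threshold neuron. Parameters are illustrative, not calibrated.
    V_REST = -80.0       # resting potential (mV), held by leak channels
    V_THRESHOLD = -55.0  # threshold potential (mV)
    LEAK = 0.1           # fraction of the gap back to rest closed per step

    def step(v, input_current):
        """Advance the membrane potential one time step."""
        v += input_current        # depolarization, e.g. from ligand binding
        v += LEAK * (V_REST - v)  # leak channels pull v back toward rest
        if v >= V_THRESHOLD:      # the all-or-nothing "boolean" moment
            return V_REST, True   # fire an action potential, then reset
        return v, False

    v = V_REST
    for t in range(10):
        v, fired = step(v, input_current=5.0)
        print(t, round(v, 2), fired)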

------
gtsnexp
To put it gently, highly reminiscent of:
[https://www.biorxiv.org/content/10.1101/613141v2](https://www.biorxiv.org/content/10.1101/613141v2)

------
ArtWomb
>>> work suggests that popular neuron models may severely underestimate the
computational power enabled by the biological fact of nonlinear dendrites and
multiple synapses per pair of neurons

Actually sounds quite significant ;)

~~~
slx26
I don't think they underestimate the non-linearity of real neurons. It's
simply a trade-off: accurate models were impractical (too computationally
expensive) the last time I checked. There's quite a bit of research on neuron
modelling, both for accurate responses and for approximations that are fast to
compute. The Wikipedia article [0] is actually a really good read. What's true
is that with the current upscaling of neural networks, it might be worth it to
look back at more sophisticated approximations and see whether they are
implementable now. But I guess the problem would not be just training, but the
fact that the trained networks would also be more expensive to run. I don't
really know much about the topic, so if anyone has studied this in more detail
and wants to share something...

[0]
[https://en.wikipedia.org/wiki/Biological_neuron_model](https://en.wikipedia.org/wiki/Biological_neuron_model)

~~~
a-priori
It's tricky to be both reasonably biologically accurate and computationally
efficient. Some models do exist to fill this niche. One I'm familiar with is
Izhikevich (2003), a model with four morphology parameters, two state
variables, and corresponding ordinary differential equations. It can simulate
the spiking behaviour of several different kinds of neurons.
[https://www.izhikevich.org/publications/spikes.htm](https://www.izhikevich.org/publications/spikes.htm)

I've written a SIMD-based simulator that can integrate 500k of those neurons
in real-time on a standard laptop. That's a bit of a limit case -- it doesn't
yet model action potentials or learning, which would slow it down quite a bit.
The details here matter a lot: a GPU version I experimented with could do
about 1M, whereas an earlier version of this work that was a straight-up CPU-
based solution (_not_ SIMD) but _with_ action potentials and learning could
only model 1k.
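
The core update is small enough to sketch in a few lines of Python (a toy
scalar Euler integration with the "regular spiking" parameters from the
paper - a real simulator vectorizes this across neurons):

    # Izhikevich (2003): v' = 0.04v^2 + 5v + 140 - u + I, u' = a(bv - u),
    # with reset v <- c, u <- u + d when v peaks.
    a, b, c, d = 0.02, 0.2, -65.0, 8.0  # "regular spiking" parameters
    dt = 0.5  # integration step (ms)

    v, u = -65.0, b * -65.0  # membrane potential (mV), recovery variable
    spikes = []
    for n in range(2000):    # one second of simulated time
        I = 10.0             # constant injected current
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike peak reached
            spikes.append(n * dt)
            v, u = c, u + d  # reset
    print(len(spikes), "spikes in 1s of simulated time")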

~~~
slx26
That's very interesting, thanks for sharing!

------
dave_sullivan
Call me crazy, but isn't this "single biological neuron" actually 2 locally
connected layers with a field width of 2 and unshared weights, with a third
fully connected layer at the end? With a ReLU nonlinearity?

I'm not surprised this does well on MNIST, and I'm not sure it breaks with
present research directions in deep learning. This network could be built
pretty easily in torch or tensorflow.
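
Something like this sketch, say - flattened MNIST input, illustrative layer
sizes, and a hand-rolled locally connected layer since torch has no built-in
one, as far as I know (not the paper's exact k-tree):

    import torch
    import torch.nn as nn

    class LocallyConnected1d(nn.Module):
        """Like Conv1d with kernel 2 and stride 2, but with *unshared*
        weights at every position."""
        def __init__(self, in_len):
            super().__init__()
            self.out_len = in_len // 2
            self.weight = nn.Parameter(torch.randn(self.out_len, 2) * 0.1)
            self.bias = nn.Parameter(torch.zeros(self.out_len))

        def forward(self, x):  # x: (batch, in_len)
            pairs = x.view(x.shape[0], self.out_len, 2)  # adjacent pairs
            return (pairs * self.weight).sum(-1) + self.bias

    # 784 MNIST pixels -> 392 -> 196 -> 10 classes
    model = nn.Sequential(
        LocallyConnected1d(784), nn.ReLU(),
        LocallyConnected1d(392), nn.ReLU(),
        nn.Linear(196, 10),
    )
    print(model(torch.randn(32, 784)).shape)  # torch.Size([32, 10])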

~~~
kroggen
Being simple does not mean it is not insightful.

~~~
dave_sullivan
I’m not seeing anything insightful though. They are taking a fairly standard
architecture, claiming it’s something different, and then claiming some
connection to biological neurons.

As I read it, this is an example of people publishing a minor tweak in the
algorithm and then claiming it’s a new biologically inspired abstraction. I
guess I just want more papers about transformers.

------
angusturner
I can't really comment on the novelty of this work, but I don't think the
connectivity structure makes much sense.

I mean, it does in the sense that local pixels are strongly correlated and a
binary tree will capture this. In fact, if you add weight-sharing to the
K-tree model you can recover 1D convolution with a stride and kernel of 2.

But is this really the right operation for images? Why a fixed kernel of 2? I
think capsules or some other vector-based operation would make more sense.
Perhaps with a learned or dynamic connectivity pattern.
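
For concreteness, with weight sharing each tree level collapses to an
ordinary strided convolution, e.g. in torch:

    import torch
    import torch.nn as nn

    # With shared weights, every parent node applies the same two weights
    # to its pair of children - i.e. one tree level is just this:
    level = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=2, stride=2)

    x = torch.randn(32, 1, 784)  # (batch, channels, length)
    print(level(x).shape)        # torch.Size([32, 1, 392]): one level up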

------
kroggen
She made a video presentation at the Brains@Bay Meetup:

[https://youtu.be/40OEn4Gkebc?t=2769](https://youtu.be/40OEn4Gkebc?t=2769)

------
bionhoward
Nice paper; this could motivate more sophisticated ANN models versus the
multiply-add-activation paradigm.

~~~
cblconfederate
In reality, this itself is an ANN with multiply-add-activation. They basically
model an imaginary dendritic tree as a hierarchy of convolutional networks
which converge to the dendrite trunk. It is, however, very far from biological
plausibility.

------
29athrowaway
Some people tend to forget that most neural models are an oversimplified
approximation of biological nervous systems.

~~~
chundicus
For early perceptrons, maybe; even that feels like a stretch. Every neural
network model created after that has nearly zero relation to any sort of
biological, cognitive system.

~~~
akyu
I think the designers of many of those models would disagree with you.

~~~
chundicus
Okay, what models?

