The Computational Power of Biological Dendritic Trees (arxiv.org)
90 points by lamename on Sept 8, 2020 | 35 comments



The amount of computational power in biological systems is simply staggering.

In extremely simple organisms like roundworms, there are on the order of hundreds of neurons; for most insects you're in the 10k-1M range.

A honeybee contains about one million neurons - computational devices that we have a hard time fully and accurately mapping - and something like a billion connections between them.

Each of those neurons contains the entire genome for that honeybee, around 250 million base pairs. Those code for all of the ~thousands of proteins that make up a honeybee - proteins are made up of sequences of amino acids which arrange themselves into shapes with different molecular interaction properties. Figuring out that shape given the amino acid sequence is so computationally difficult that it spawned the Folding@Home project, which is one of the largest collections of computing power in the world.

The process of translating from DNA through RNA to a protein is itself substantially harder than it sounds - spend time with a bioinformatics textbook at some point to see some of the features of DNA, such as non-coding regions in the middle of sequences that describe proteins, or sections of RNA which themselves fold into functional forms.

None of this is even getting down to the molecular level, where the geometry of the folded proteins allows them to accelerate reactions by millions or trillions of times, allowing processes which would normally operate at geological scales to be usable for something with the lifespan of a bacterium.

The most complex systems we've ever devised pale in comparison to even basic biological systems. You need to start to look at macro-scale systems like the internet or global shipping networks before you start to see things that approximate the level of complexity of what you can grow in your garden.

Nature builds things, we're playing with toys.


Do we understand what the fundamental computational operation in biology is yet? E.g., in computers it's Boolean logic: transistors acting as on/off gates representing true/false or 1/0, from which all other, more complex logical operations can be composed.

Does biology have an equivalent? Or is your comment essentially explaining that it probably does, but is more complex and we don't understand it yet?


There's no one model. How your immune cells pick their targets, or how your cells know which side is left or right during development - neither is anything like how neurons work, or how blood ion homeostasis is achieved. There are also (I believe) transistor-like elements in metalloproteins that gate voltages in electroactive systems like nitrogen fixation or photosynthesis (though that process is markedly dumber than the immune system).

Evolution is opportunistic and will reach for whatever computational method might be available and easy to fall into.


May I recommend the book On Intelligence.

Great book on approaching AI by first understanding the human brain...


I'm the Wrong Person to Answer This - I'm a hobbyist and a dilettante, not a scientist, but here's how I understand it:

At its core, what you're seeing in all of these steps are molecular interactions - neurons fire at the rate they do because of the sodium, potassium, and calcium ions they build up; different chemical signaling chains are what prompt the transcription of given genes; charge affinities between the amino acids and with their environment are what create the shapes of proteins and give them their properties and capabilities. Effectively, each atom - each electron on each atom - is affecting these interactions, and that's what's driving the whole system.

At the molecular level, the properties of a molecule are (effectively) determined by its structure and charge distribution - the atomic composition of the molecule, where electrons are likely to congregate, which bonds are stronger or weaker, and where atoms can be added or removed. These affect how the molecule reacts with other molecules, and each reaction that changes either the structure or the charge distribution changes how the molecule will react going forward.

So, the computational model is effectively a physical/structural one - how these structures meet, compare, and combine, played out over trillions of interactions and interaction chains.

(I'm consciously ignoring the quantum side, because A) I don't understand it well enough and B) the structure/charge lens seems to be basically sufficient.)

It's worth taking a look at some of the cellular chemical pathways to see how this plays out - for example, the Krebs cycle, which is basically a chain of interactions in which a molecule (or several) is modified step by step through splits, joins, additions, and subtractions, each one setting up the next step.

Part of what makes this tricky is that, while you can "zoom out" and focus on larger systems like neurons or genomes, the molecular interaction model shows up all over the place - neurons fire to the degree they do because of charge accumulation, DNA & RNA transcription are strongly affected by Weird Molecular Interactions, protein folding is at least partially a product of charge affinities, enzymes work as they do because of structure and charges. This is why a lot of these problems are enormously computationally difficult - it's a physical system, not a logical one - there's no way to isolate any layer from any other layer.

(I deeply welcome corrections on any of the above, by the way - I've spent time reading on all of this, but I'm not a professional by any stretch. The above is the model I've acquired over time, not reality.)


Not as a correction but more of an into-the-weeds clarification: neurons at rest have a pretty stable cytosolic ionic composition; they hover around a -80 mV resting potential due to leak channels.

The firing of an action potential, on the other hand, only happens when they become depolarized enough to reach their threshold potential. If they reach the threshold (generally due to ligand-receptor binding) then voltage gated sodium channels open up and the neuron gets flooded with positive charge - this is the electrical impulse that moves down the axon.

To be honest I'm not someone who knows much about computer science, but in terms of a boolean-type operator, the closest thing that comes to mind is the threshold potential. It's an all-or-nothing process, either T or F, and if it's true then an action potential is generated.
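
If it helps, that threshold idea maps onto a few lines of Python - a toy integrate-and-fire loop, with made-up numbers rather than physiological values:

    # Toy "threshold as a boolean" sketch; numbers are illustrative only.
    def step(v, input_current, dt=0.1, v_rest=-80.0,
             v_threshold=-55.0, v_reset=-80.0, leak=0.1):
        # Leak channels pull the membrane potential back toward rest.
        v = v + dt * (-(v - v_rest) * leak + input_current)
        if v >= v_threshold:       # threshold crossed -> all-or-nothing spike (True)
            return v_reset, True   # potential resets after the spike
        return v, False            # below threshold -> no spike (False)

    v, spikes = -80.0, 0
    for _ in range(1000):
        v, fired = step(v, input_current=3.0)
        spikes += fired
    print("spikes:", spikes)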


In terms of neural computation modeling or in terms of morphogenetic models that are fully distributed, we are not even close, in my non-expert opinion.


The most impressive thing is that every single cell has to carry the entirety of the instruction.

And somehow be able to polymorph into something like a human.


> every single cell has to carry the entirety of the instruction.

I think it's more impressive that it doesn't. Quite a bit is emergent from stochasticity; the encoded parts are just how to guide the stochasticity correctly.

An analogy is building a Sierpinski triangle from die rolls and one instruction.
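
In case that analogy isn't familiar, it's the "chaos game": random rolls plus one rule, and the Sierpinski triangle emerges. Roughly:

    import random

    # Chaos game: three fixed corners, one rule, random "die rolls".
    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
    x, y = 0.25, 0.25                      # arbitrary starting point
    points = []
    for _ in range(50_000):
        cx, cy = random.choice(corners)    # the die roll
        x, y = (x + cx) / 2, (y + cy) / 2  # the single instruction: move halfway
        points.append((x, y))
    # Plot `points` and the Sierpinski triangle appears.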


Still the same. We regularly compress stuff before sending it out and make programs work in multiple different environments; a program could choose to install one part on this machine and another part elsewhere based on some env variables, and still work almost the same way.

But a human is one SINGLE program, running in distributed fashion across about 30 TRILLION cells, that makes us operate functionally as one single entity. That is what's impressive. What's more, the organization of those cells is also handled by the program.

Of course, the debugging process has not been a nice one.


That's not true. Your immune system actively randomizes its DNA and reprograms itself stochastically, followed by a system of natural selection to pick winners. Generally speaking, there is no reproducibility; it will not "work almost the same way" between two organisms, given identical input DNA from birth.


There is also some evidence of Epigenetic Memories - memories that are epigenetically encoded and passed from generation to generation.


So my understanding of epigenetic changes generally is that they're changes in gene expression that last beyond a single cell generation, and sometimes beyond a single organism generation.

When people talk about "Epigenetic Memories," they're using a bit of a romantic term for changes in gene expression that last beyond a single generation - for instance, a mother experiences a drought or famine that causes changes that can be detected in her offspring.


This is an interesting article: https://www.scientificamerican.com/article/fearful-memories-... and it is the reason I said "some evidence".

I am cautioning myself not to be romantic about this topic, but at the same time, neural development is not a random arrangement of cells, and arguably the brain has predispositions to behaviors. There is a possibility that some genetic mutations within single generations are not strictly random; furthermore, such mutation mechanisms may have evolved over millions of years specifically to introduce changes in response to environmental pressure. (This is only my speculation, of course.)


To put it gently, highly reminiscent of: https://www.biorxiv.org/content/10.1101/613141v2


>>> work suggests that popular neuron models may severely underestimate the computational power enabled by the biological fact of nonlinear dendrites and multiple synapses per pair of neurons

Actually sounds quite significant ;)


I don't think they underestimate the non-linearity of real neurons. It's simply a trade-off: accurate models were impractical (too computationally expensive) the last time I checked. There's quite a bit of research on neuron modelling, both for accurate responses and for approximations that are fast to compute. The Wikipedia article [0] is actually a really good read. What's true is that with the current upscaling of neural networks, it might be worth looking back at more sophisticated approximations and seeing if they are implementable now. But I guess the problem would not be just training - the trained networks would also be more expensive to run. I don't really know much about the topic, so if anyone has studied this in more detail and wants to share something...

[0] https://en.wikipedia.org/wiki/Biological_neuron_model


It's tricky to be both reasonably biologically accurate and computationally efficient. Some models do exist to fill this niche. One I'm familiar with is Izhikevich (2003), a model with four morphology parameters, two state variables, and corresponding ordinary differential equations. It can simulate the spiking behaviour of several different kinds of neurons.

https://www.izhikevich.org/publications/spikes.htm

I've written a SIMD-based simulator that can integrate 500k of those neurons in real-time on a standard laptop. That's a bit of a limit case -- it doesn't yet model action potentials or learning, which would slow it down quite a bit. The details here matter a lot: a GPU version I experimented with could do about 1M, whereas an earlier version of this work that was a straight-up CPU-based solution (_not_ SIMD) but _with_ action potentials and learning could only model 1k.
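
For anyone curious what the model looks like, the core update is only a few lines. A rough Python sketch (the paper's "regular spiking" parameters, a constant injected current, and a single 1 ms Euler step where the original integrates v in two half-steps for stability):

    # Izhikevich (2003): two state variables (v, u), four parameters (a, b, c, d).
    a, b, c, d = 0.02, 0.2, -65.0, 8.0     # "regular spiking" settings
    v, u = -65.0, b * -65.0

    spike_times = []
    for t in range(1000):                  # ~1 s of simulated time at dt = 1 ms
        I = 10.0                           # constant injected current (illustrative)
        v += 0.04 * v * v + 5 * v + 140 - u + I
        u += a * (b * v - u)
        if v >= 30.0:                      # spike: reset membrane and recovery variable
            spike_times.append(t)
            v, u = c, u + d
    print(len(spike_times), "spikes")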


That's very interesting, thanks for sharing!


Call me crazy, but isn't this "single biological neuron" actually 2 locally connected layers with a field width of 2 and unshared weights, with a third fully connected layer at the end? With a ReLU nonlinearity?

I'm not surprised this does well on MNIST, and I'm not sure it breaks with present research directions in deep learning. This network could be built pretty easily in torch or tensorflow.
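
Roughly, in PyTorch (my own naming and initialization, not the paper's code): two kernel-2, stride-2 layers with unshared weights, ReLU, and a dense readout.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LocallyConnected(nn.Module):
        """Kernel-2, stride-2 layer with UNSHARED weights: one weight pair
        per output position, i.e. one level of a binary tree."""
        def __init__(self, in_len):
            super().__init__()
            self.out_len = in_len // 2
            self.weight = nn.Parameter(torch.randn(self.out_len, 2) * 0.1)
            self.bias = nn.Parameter(torch.zeros(self.out_len))

        def forward(self, x):                              # x: (batch, in_len)
            pairs = x.view(x.shape[0], self.out_len, 2)    # group inputs in twos
            return F.relu((pairs * self.weight).sum(-1) + self.bias)

    class TinyTreeNet(nn.Module):
        """Two unshared local layers followed by a dense readout."""
        def __init__(self, in_len=784, n_classes=10):
            super().__init__()
            self.l1 = LocallyConnected(in_len)             # 784 -> 392
            self.l2 = LocallyConnected(in_len // 2)        # 392 -> 196
            self.readout = nn.Linear(in_len // 4, n_classes)

        def forward(self, x):
            return self.readout(self.l2(self.l1(x)))

    logits = TinyTreeNet()(torch.randn(32, 784))   # e.g. a batch of flattened MNIST digits
    print(logits.shape)                            # torch.Size([32, 10])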


Yes, I stopped reading as soon as I found they used gradient descent. Our brains do not use gradient descent to learn. If you use any sufficiently complex model and fit MNIST by defining a loss and minimizing it directly, it will very likely work. Even a simple linear model - logistic regression on the 784 pixels - gives good accuracy of around 90%.
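
For reference, that ~90% baseline is a few lines with scikit-learn (the OpenML download of MNIST takes a minute; the exact accuracy will vary a bit):

    from sklearn.datasets import fetch_openml
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Plain logistic regression on the raw 784 pixels.
    X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
    X = X / 255.0
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=10000, random_state=0)
    clf = LogisticRegression(max_iter=200).fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))   # roughly 0.9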


A simpler description would be a sparsely connected network, with a tree shaped connectivity pattern.

There is a lot of research potential waiting to be discovered (or maybe rediscovered) with sparsity in neural networks, so I am a fan of this area of research. I would even say that sparsity is playing a much bigger role than we realize in the success of many neural architectures that are popular today.


Being simple does not mean it is not insightful.


I’m not seeing anything insightful though. They are taking a fairly standard architecture, claiming it’s something different, and then claiming some connection to biological neurons.

As I read it, this is an example of people publishing a minor tweak in the algorithm and then claiming it’s a new biologically inspired abstraction. I guess I just want more papers about transformers.


I can't really comment on the novelty of this work, but I don't think the connectivity structure makes much sense.

I mean, it does in the sense that local pixels are strongly correlated and a binary tree will capture this. In fact, if you add weight-sharing to the K-tree model you can recover 1D convolution with a stride and kernel of 2.

But is this really the right operation for images? Why a fixed kernel of 2? I think capsules or some other vector-based operation would make more sense, perhaps with a learned or dynamic connectivity pattern.
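
(For what it's worth, the weight-shared version of one tree level is just a standard kernel-2, stride-2 convolution, e.g. in PyTorch:)

    import torch.nn as nn

    # One level of the tree with tied weights = ordinary 1D convolution,
    # kernel 2, stride 2 (single input/output channel for illustration).
    shared_level = nn.Conv1d(in_channels=1, out_channels=1,
                             kernel_size=2, stride=2)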


She made a video presentation at the Brains@Bay Meetup:

https://youtu.be/40OEn4Gkebc?t=2769


Nice paper - this could motivate more sophisticated ANN models versus the multiply-add-activation paradigm.


In reality, this itself is an ANN with multiply-add-activation. They basically model an imaginary dendritic tree as a hierarchy of convolutional networks that converge at the dendrite trunk. It is, however, very far from biological plausibility.


Some people tend to forget that most neural models are an oversimplified approximation of biological nervous systems.


Ehhhh, maybe early on, but AFAIK most recent work w.r.t neural networks, except when explicitly trying to mimic actual neurons, isn't really trying to approximate or simplify anything.

The basic mechanic of using wings to generate lift might have been inspired by birds in flight, but a 737 is not a simplified pigeon.


I agree. A 737 is a shadow of a cardboard cutout of a paper origami model of a simplified pigeon. We have barely scratched the surface of what biological systems can do, especially in the dynamic movement and shape changing aspects as well as the energy efficiency of computational and control systems that are involved in biological systems.

In every system we create in which we mimic biological systems (whether communication, transport, robotics, computation, chemistry, etc.), the resulting systems are, in relative terms, extremely simplistic, clunky and inefficient. Our systems may look highly complex to us, but in reality they are very, very simple in comparison to the original systems we want to mimic. We, at this point, lack the finesse to design anything comparable. We have a long, long way to go yet.


For early perceptrons, maybe; even that feels like a stretch. Every neural network model that has been created after that has nearly zero relation to any sort of biological, cognitive system.


The neocognitron and convolutional neural networks were inspired by Hubel and Wiesel's experiments on cats.


I think the designers of many of those models would disagree with you.


Okay, what models?



