
Computational Power Found in the Arms of Neurons - reubenswartz
https://www.quantamagazine.org/neural-dendrites-reveal-their-computational-power-20200114/
======
bonoboTP
Knowing little about neuroscience outside of biology class, I've always
wondered why neurons would be so simple as to just compute weighted sums and
thresholds.

I mean, there are unicellular creatures that can do incredibly complex things,
and neurons, like all cells, are ultimately the descendants of such unicellular
organisms, "teamed up" into a multicellular creature. They have a lot of
internal structure, organelles, whatnot. Modeling them as simple switches
seems a stretch.

My point is, overall this doesn't seem so surprising to me as a layman. (Which
doesn't mean more than just literally that. I'm not claiming laypeople should
decide scientific questions from their armchairs.)

~~~
svara
This is an intuition that many neuroscientists have shared; it's just that
direct experimental evidence of (1) it happening at all in a non-trivial way
and (2) it being relevant to the behavior of an animal is somewhat scarce.

Plus, the sum and threshold model actually gets you pretty far, as evidenced
by artificial neural networks, so there might be some resistance to adding
complexity that might not be necessary.

Edit: Google "dendritic computation" if you want to dig deeper.

~~~
sansnomme
Also: spiking neural networks. Doing useful stuff with them has been a work-
in-progress so far though.

~~~
nravic
Yeah, spiking neural networks are a tough nut to crack. Check out Nengo if
you're interested in learning more.

------
orbifold
For those of you that want to be horrified / amused by code, check out the
materials and methods of the article itself. It contains a link to the full
source code used to generate the modeling figures:

[https://senselab.med.yale.edu/ModelDB/showmodel.cshtml?model...](https://senselab.med.yale.edu/ModelDB/showmodel.cshtml?model=254217&file=/GidonEtAl2020_fig3andS9/_mod/Traub.mod#tabs-1)

The simulator used is called NEURON. Code for it is written in a custom
language called Hoc, and the membrane mechanisms are implemented in yet
another domain-specific language (NMODL). Hardcoded parameters galore,
including whole sections of code prefaced with a note that they were copied
from another publication.

[https://senselab.med.yale.edu/ModelDB/showmodel.cshtml?model...](https://senselab.med.yale.edu/ModelDB/showmodel.cshtml?model=254217&file=/GidonEtAl2020_fig3andS9/_mod/Traub.mod#tabs-2)

The dendritic tree geometry is specified as a hardcoded set of point and
branch declarations:

[https://senselab.med.yale.edu/ModelDB/showmodel.cshtml?model...](https://senselab.med.yale.edu/ModelDB/showmodel.cshtml?model=254217&file=/GidonEtAl2020_fig3andS9/_morph/171122005.hoc#tabs-2)

(apparently that morphology was reconstructed from imaging data?)

~~~
mattmanser
Why horrified? A quick scan and it's actually nicely laid out and indented,
split into logical sections and commented.

Without wanting to spend a lot of time on it, why is it horrific?

There's nothing intrinsically wrong with hardcoded values as they're
biological constants, and nicely identified by the standard convention of
using ALL_CAPS_VARIABLES.

Edit: Scanning a bit more, he's probably referring to this file, which is
pretty horrific:

[https://github.com/ModelDBRepository/254217/blob/master/_mor...](https://github.com/ModelDBRepository/254217/blob/master/_morph/171122005.hoc)

------
eismcc
As the saying goes, “if a brain were easy enough to understand, we’d be too
stupid to understand it.”

------
api
As someone who studied neurobiology more deeply than most CS people, I've been
talking into the wind for a long time: neurons are not point objects that can
be simply modeled by equations. They are organisms with incredibly complex
_behaviors_ , internal gene regulatory structures, and other information-
processing capabilities.

Neural networks are coarse grained models of large scale brain structure. The
fact that these models can be taught to do very interesting brain-like things
(especially pattern recognition) demonstrates that this structure is important
and fundamental, but that doesn't mean it's the whole picture of what's going
on in the brain.

I say talking into the wind because in my experience most CS people tend to
hand-wave away biology at the cellular level and below. There's this zealotry
around us being "almost there" with AGI and the rest that blinds people to the
real magnitude of the problem and how much is really happening in the brain.
This likely includes a whole lot that we haven't even started to really
understand.

~~~
bordercases
Amen.

------
deckar01
> It may also prompt some computer scientists to reappraise strategies for
> artificial neural networks, which have traditionally been built based on a
> view of neurons as simple, unintelligent switches.

It has been suggested that neural networks have diverged from biological
principles to the point where biological research no longer offers useful
improvements to current machine learning techniques. Backpropagation of errors
was a major advancement in the 1970s, and it was developed purely from
mathematical principles: statistics, differential equations, calculus.
As far as I have read, error backpropagation is not how biological NNs learn,
and it seems to be a much more efficient strategy. Biology seems much more
brute force in comparison.

~~~
gambler
_> Biology seems much more brute force in comparison._

A single honeybee, with its billion synapses, can adapt to a variety of
situations and environments, learn on the fly, perform complex sequences of
tasks, and cooperate and communicate with other bees. All of these
capabilities are emergent and packed into its tiny head.

State-of-the-art artificial neural networks (now ranging beyond billions of
parameters) only do the thing they're specifically built for, only after
training on a bazillion specific examples, and they consume tons of energy
while doing so.

Which one of these sounds like the brute force approach?

~~~
deckar01
The majority of behavior in organisms like bees is instinctual, not learned in
their lifetime. That training required millions of billions of trials over the
course of hundreds of millions of years.

~~~
gambler
_> The majority of behavior in organisms like bees is instinctual, not learned
in their lifetime._

It doesn't matter whether their behavior is considered "instinctual". What
matters is that they can quickly adapt their behavior to entirely novel
scenarios:

[https://science.sciencemag.org/content/355/6327/833](https://science.sciencemag.org/content/355/6327/833)

------
tudorw
“Brains may be far more complicated than we think,” as said by anyone who has
worked with brains! Good stuff though, very interesting :)

~~~
prox
As said by a brain studying brains

~~~
tudorw
Don't get me started. If a complex brain can finally understand the brain,
does that not mean we will still be missing the special sauce that was the
cause of the complex comprehension? Er, I'm out!

~~~
prox
This is why in some philosophies, there is a big difference between
experiential knowledge and awareness and empirical ‘measurement’ knowledge.

------
RocketSyntax
Ah, it's spiking edges as opposed to spiking node activations. I like this
because it applies to one of many connections uniquely, rather than dividing
the entire graph by spiking neurons. There are quadratically more edges than
nodes in a dense net. Connections are always more important than entities.

> The dendrites generated local spikes, had their own nonlinear input-output
> curves and had their own activation thresholds, distinct from those of the
> neuron as a whole

How would this work in practice? Apply an activation to each weight's product
individually, or just skip the multiplication entirely when the node's
activation is low?

~~~
AstralStorm
Change the simulation unit from neurons to dendrites. Mix in different kinds
of gates and nonlinear activations.

Which is what's being done lately, but then we know next to nothing about the
topology of real neural nets, nor about the electrochemical communication
inside a neuron.

------
haffi112
I wonder if this is also the case for neurons in other animals. Probably it
is, but it would be a stunning revelation if it were not.

~~~
chmod775
From the top reddit comment:

> This paper is amazing. What is missing from the description above is that
> this is the first example of how human neurons are qualitatively different
> than rodent neurons (not only more computation power, but categorically
> different computation).

> ELI5: the way the biological human neuron implements XOR is by a formerly
> unknown type of local response to inputs, which is low below the threshold,
> maximal at the threshold and decreases as the input intensifies above the
> threshold. We never saw anything like that in any other animal. (link to the
> relevant figure from the paper)

Rodents being presumably the go-to non-human object of study.
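The response shape described in that ELI5 (low below threshold, maximal at the
threshold, falling off above it) really is enough to get XOR out of a single
unit. A toy sketch, where the Gaussian bump and its parameters are my own
stand-in for the paper's measured waveform, not taken from it:

```python
import numpy as np

def bump(s, theta=1.0, width=0.5):
    # Response is low below the threshold, maximal at it, and falls
    # off again as the input intensifies past it.
    return np.exp(-((s - theta) ** 2) / (2 * width ** 2))

def single_unit_xor(x1, x2):
    s = 1.0 * x1 + 1.0 * x2        # weighted sum, unit weights
    return bool(bump(s) > 0.5)     # high response counts as "1"

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", single_unit_xor(a, b))
```

Inputs summing to exactly the threshold land on the peak of the bump; summing
to 0 or 2 lands on its tails, so only (0,1) and (1,0) fire. A monotonic
threshold can't do that, which is why this response type is notable.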

~~~
dharma1
Could you link the Reddit comment please?

~~~
chmod775
Oh. The link was changed on HN.

Here is the reddit comment:
[https://www.reddit.com/r/MachineLearning/comments/ejbwvb/r_s...](https://www.reddit.com/r/MachineLearning/comments/ejbwvb/r_single_biological_neuron_can_compute_xor/fcx1ocb/)

------
mrow84
OT, but "They obtained slices of brain tissue from layers 2 and 3 of the human
cortex" has a slightly uncomfortable air to it.

~~~
leemailll
[https://www.youtube.com/watch?v=Zj3RxtJ_Ljc](https://www.youtube.com/watch?v=Zj3RxtJ_Ljc)

------
PinkMilkshake
The neurons in the Creatures video games come to mind.

    
    
    In Creatures Evolution Engine games such as Creatures 3, each neuron
    and dendrite is a fully functional register machine.

[https://creatures.wiki/Brain#SVRules](https://creatures.wiki/Brain#SVRules)

------
neural_thing
I have written a short, clear, and dense book about this exact subject. It's
free! Read it here:

[http://www.corticalcircuitry.com/](http://www.corticalcircuitry.com/)

~~~
neural_thing
A preprint that came out after I released the book that I would have
mentioned:

[https://www.biorxiv.org/content/10.1101/613141v1.full](https://www.biorxiv.org/content/10.1101/613141v1.full)

They model a single cortical neuron as a deep neural network with 7 hidden
layers consisting of 128 hidden units each.

Biological neurons are A LOT more powerful than the "neurons" in artificial
neural networks. If you hear a claim that computer-brain parity is close, the
people making it almost certainly don't understand the power of cortical
neurons.

------
whatshisface
Question for ML people: would backpropagation work if some of the neurons had
nonmonotonic activation functions (for example exp(-x²)), or would the
gradient descent get stuck on one side or the other?

~~~
cyorir
Gradient descent should have no issues with a function like exp(-x^2).
Actually, softmax (softmax(x_i) = exp(x_i)/sum_j(exp(x_j))) is sometimes used
as an activation function, and it could make sense to modify it to use -x^2 in
place of x for some use case. However, it doesn't always make sense as a
drop-in replacement for other activation functions like ReLU or sigmoid. It
really depends on your use case.
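For what it's worth, this is easy to check numerically: a tiny two-layer net
with exp(-x^2) hidden and output activations, trained on XOR with hand-rolled
backprop, reduces its loss without getting stuck (toy code; the architecture,
seed, and hyperparameters are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Nonmonotonic "bump" activation and its derivative.
def act(x):  return np.exp(-x ** 2)
def dact(x): return -2.0 * x * np.exp(-x ** 2)

# XOR, the classic not-linearly-separable toy problem.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 1)); b2 = np.zeros(1)

def mse():
    return float(np.mean((act(act(X @ W1 + b1) @ W2 + b2) - y) ** 2))

loss_before = mse()
lr = 0.05
for _ in range(5000):
    # Forward pass.
    z1 = X @ W1 + b1; h = act(z1)
    z2 = h @ W2 + b2; out = act(z2)
    # Backward pass: plain chain rule through the bump
    # (constant factors folded into the learning rate).
    g2 = (out - y) * dact(z2)
    g1 = (g2 @ W2.T) * dact(z1)
    W2 -= lr * h.T @ g2; b2 -= lr * g2.sum(axis=0)
    W1 -= lr * X.T @ g1; b1 -= lr * g1.sum(axis=0)
loss_after = mse()
print(f"loss: {loss_before:.3f} -> {loss_after:.3f}")
```

The gradient can point "the wrong way" on one side of the peak locally, but
gradient descent only needs the local slope, so nonmonotonicity per se is not
a blocker.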

------
Galaxeblaffer
Anybody know where these super neurons are supposed to be located in the
brain? Are they exclusively located in the neocortex?

~~~
buboard
Neocortex, from the anterior temporal lobes of epileptic patients. This kind
of dendritic response has not been found in the dendrites of other mammals so
far. It's not a new type of neuron.

------
ASlave2Gravity
So does this point to us being biological machines? Is it already accepted
that we are biological machines?

~~~
valvar
You'll have to be more specific with what you mean by "machines". In what ways
would machines be different from non-machines?

~~~
ASlave2Gravity
That's a good question.

I guess I'd say a Turing machine is my definition of a machine.

And I suppose the implication is that if we are machines, operating on a
stream of input, does that make us deterministic?

EDIT: What I'm ultimately interested in is if we, as humans, are operating in
a realm that logic cannot operate in. If that makes any sense? I'll probably
have to define realm!

~~~
K0SM0S
You could argue that "man-made machines", "biological machines" (all of life,
animal, vegetal and fungus) and "simple matter machines" (like stars are
engines or telluric planets are combustion heaters), even the "surface natural
ecosystem of Earth", are all different _material implementations_ of machines:
organized systems. Carbon-based, copper-based, hydrogen-based... you might map
the whole periodic table of elements minus the rare column perhaps.

Ultimately you'd find a common unicity — like electrical charges, strong/weak
force, arrangements of "gates" and "structure" etc. — but used in different
ways (for instance iirc charged clouds of gas in space don't form new stars
because gravity at close range is weaker than the charge that repels them,
only strong enough at larger distances to hold them together; that's
dramatically different from the use of charges in biological cells).

The cosmos itself is but a big machine. Who's to say we (I mean the whole
planet, maybe star system, maybe galaxy itself) aren't actually just a single
"cell" of the cosmos? That we are part of a much bigger machine, that we are
like those processing units on dendrites in the article, if the universe is a
big brain of sorts?[1]

These are all _types_ of machines (X. process of information; Y. engines to
convert energy; Z. structures that "restrict" "flow" like gates, transistors,
cell membranes, dendrites; N...), which apparently may be expressed,
materially implemented in different ways.

To take your question about logic.

If you mean the universal logic exposed above, I don't think so personally —
merely because there's no evidence for it whatsoever, no observation; and
there's also no need if we accept that from such "machinery" complexity may
emerge complex systems (e.g. humans).

If you mean logic from _within_ the human mind — and this begs the question of
whether maths and physics are "invented" or "discovered" in the background —
then we must assume a subjective answer, at best an aggregate of "non-
disproved facts" that we can all agree on within normalcy (edge cases helping
us understand said average norm).

"Logic" is but one of several operating modes. Are we more than that? Most
certainly yes. That's the experience of all of us. Because perception is so
centered on thought, we tend to overappreciate the importance of thinking in
our lives, in our behaviors and even values; but the relative picture
gradually revealed by biology and psychology and sociology and economics etc.
is that we are _mostly irrational_ , _mostly automatic_ (trained habits,
pattern recognition, educated intuition, etc.), and actually very little in
the way of "logical behavior units".

I'll let you ponder the discrepancy from an absolutely (perhaps) low-level
logical machine to the possibility of a chaotic non-logical emergent high-
level behavior. What it means for life, for AI, and possibly much bigger or
smaller things we consider "inert", for now.

[1]: I mean, look at the larger structures of the universe, tell me it doesn't
look like a sample from a biological tissue...
[http://cosmology.com/GalacticWalls.html](http://cosmology.com/GalacticWalls.html)
(figure 6:
[http://cosmology.com/images/darkmatterdistribution.jpg](http://cosmology.com/images/darkmatterdistribution.jpg))

~~~
ASlave2Gravity
> Because perception is so centered on thought, we tend to overappreciate the
> importance of thinking in our lives, in our behaviors and even values

Very true. Thought can become our master easily. As a programmer, I often find
myself struggling to get out of 'logic' mode or 'thinking' mode.

> I'll let you ponder the discrepancy from an absolutely (perhaps) low-level
> logical machine to the possibility of a chaotic non-logical emergent high-
> level behavior. What it means for life, for AI, and possibly much bigger or
> smaller things we consider "inert", for now.

Wonderfully put! I think the ideas of emergent behaviours are a saving grace
for our incessant desire to reduce everything to its atomic components (in the
sense of an atomic operation, not an atom). The emergent behaviours _can't_ be
predicted. They can't be algorithmized, because to build algorithms one must
go down to those atomic operations.

My wider concern is that we are currently obsessed with analytical thought and
philosophy and that the continental philosophy has become a 2nd class citizen.
I think this has happened because analytical thought lends itself to being
algorithmized, whereas continental philosophy does not.

Sorry, that was a bit of a tangent!

Yes! Love the pictures! I have had the same thought/feeling when I first saw
them too!

~~~
K0SM0S
> I think the ideas of emergent behaviours are a saving grace for the
> incessant desire of ours to reduce everything to its atomic (in the sense of
> an atomic operation, not an atom) components.

Very well said! Humans think a lot in terms of dichotomies (here the
micro/macro, or whichever scale you want to consider), but I've heard
biologists and physicists explain that the closest thing to fundamental
functions looks more like _e_ (the base of the natural log), _sine_ (and
hyperbolic stuff); the notion of "binary polarity" is very specific and quite
limited — look at the Standard Model, it's clearly more complicated.

I personally think that's how the dimension of time emerges in our perception
(any perfect observer): periodicity in all phenomena (but with e.g. Fourier it
gets extremely complex at emergent thresholds), the arrow of time (non-
periodicity is basically high entropy, 'heat death'/homogeneity).

Cue any such human-driven dichotomy. (I think this one is relatively easy: the
deepest processing system is probably close to "emotions", and these at the
lowest level work in terms of "rather good" and "rather bad" (different sub-
regions of the brain) plus some "general integration" (a third region).) And
what do you know, we tend to be heavily polarized in general; it's like most
people feel reassured (familiarity) when things are explained in terms of
black and white, left and right, ones and zeros.

(Just my thoughts on it.)

> My wider concern is that we are currently obsessed with analytical thought
> and philosophy and that the continental philosophy has become a 2nd class
> citizen. I think this has happened because analytical thought lends itself
> to being algorithmized, whereas continental philosophy does not.

Strongly agreed. I think we're witnessing a kind of "revival" through many
fields (some spiritual, some historical). Things are moving. I think dumb
things like numbers also influence people. It's been ~20 years since the turn
of the millennium, and that's enough for one generation to weigh in and the
others to accept change because "oh, new era, obviously, new number!" — this
plays at the subconscious level of very crude processing, methinks.

Nice tangent, hopefully not too hyperbolic. ;-)

------
anentropic
I remember reading this before, in a comment somewhere here on HN

