
The cortex is a neural network of neural networks - curtis
https://medium.com/the-spike/your-cortex-contains-17-billion-computers-9034e42d34f2
======
skywhopper
This is a good article for deepening the complexity model of the brain, but
mere physical structure and behavior of subcomponents of neurons is only a
tiny piece of the puzzle. To understand the workings of the brain you will
also need to add in the complex interactions of hormones and other chemical
agents in the brain and throughout the nervous system and the feedback loops
they establish with the other systems of the body and through them, with the
external world.

Neural networks of neural networks doesn’t begin to describe it. We haven’t
even scratched the tip of the tip of the iceberg in understanding this stuff.

~~~
ltbarcly3
I appreciate what you are saying, but it may be premature to list the things
necessary to understand something that nobody yet understands.

In fact, I think you're probably dead wrong. Certainly hormone interactions
and feedback loops in the body have some consequence on an organism, but these
things take time. Let's say that some feedback loop is extremely fast, say 10
seconds. It seems reasonable to assume that you can't have a complex feedback
loop with the body much faster than that, simply because it takes time for
chemicals to physically move. Cognition is much, much faster than that: there
are many situations where you can learn, and then apply that learning, in
less than a second. Now, if we were instead talking about what it takes to
build a complex organism that can satisfy its own needs, then certainly these
complex hormonal feedback loops are essential. An organism can't survive if it
doesn't look for food when its body is out of fuel, or keeps eating poisonous
berries because its body can't send the appropriate feedback to its brain, and
you can forget about dropping everything and competing for mates at the
correct time without some kind of behavior-modifying hormonal signals!

I would say that your assertion is not significantly different from saying
"you can't understand the workings of the brain without considering the
complex interactions of the brain with the lungs, since without oxygen the
brain can't work". So while I agree that the brain exists as part of the body,
I don't see why you would automatically assume that the hormonal feedback
loops with the body are at all necessary for cognition rather than a way to
tell the brain when it's necessary to find food, when it's adaptive to
conserve energy and be lethargic, or what have you.

My point is that we may or may not find that there is some critical ingredient
of cognition hiding in this place or that, and you can't very well tell us
where that will be if nobody knows if it's there.

~~~
feanaro
Well neurotransmitters are a kind of hormone and they certainly play a direct
role in the computational processes of neurons.

~~~
seandhi
Neurotransmitters are not hormones. Some neurotransmitters are also hormones.

~~~
feanaro
Yes, you're right. That was phrased a bit haphazardly.

My point was that we can't just determine the typical time constant for a
hormone's effect and run home with it since there _are_ chemicals which have
significantly lower time constants and some of them act as endocrine hormones
as well.

Furthermore, an immediate feedback loop isn't the only possible way the body
could play a role in cognition, though I suppose the OP is aware of this and
was talking about moment-by-moment interaction with the body on purpose.

------
buboard
So, a neural network of neural networks is just a deeper neural network. The
big question in dendritic processing is whether it is used (there is
conflicting information about that, e.g. Jia & Konnerth's work), whether it
represents anything, and how it is learned. Plasticity is all over the place
in neurons and also takes place at the dendritic level, with cooperation &
competition between synapses, temporal dynamics, and neuromodulation. The
credit assignment problem is hard to solve at the circuit/population level,
but dendrites offer an intriguing alternative, as it is possible for them to
bidirectionally communicate with the spike initiation site.
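The two-layer abstraction discussed here can be sketched in a few lines. This
is only an illustrative toy: the subunit count, the weights, and the choice of
sigmoid nonlinearities are arbitrary assumptions, not parameters of any real
neuron.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def two_layer_neuron(inputs, dendrite_weights, soma_weights):
    """One cell modelled as a tiny two-layer network: each dendritic
    subunit applies its own nonlinearity to its synapses, and the soma
    combines the subunit outputs through a second nonlinearity."""
    subunit_outputs = [
        sigmoid(sum(w * x for w, x in zip(weights, inputs)))
        for weights in dendrite_weights
    ]
    return sigmoid(sum(w * s for w, s in zip(soma_weights, subunit_outputs)))

random.seed(0)
synapses = [random.gauss(0, 1) for _ in range(12)]    # 12 synaptic inputs
dendrites = [[random.gauss(0, 1) for _ in range(12)]  # 4 dendritic subunits
             for _ in range(4)]
soma = [random.gauss(0, 1) for _ in range(4)]
print(two_layer_neuron(synapses, dendrites, soma))
```

A point-neuron model would apply a single nonlinearity to one weighted sum of
all 12 synapses; the dendritic stage above can already compute functions (such
as XOR-like combinations of clustered inputs) that a single weighted sum
cannot.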

~~~
svantana
Indeed, the Network-in-Network architecture [1] was a compelling idea for
getting complex activations, until it was realised that it's just a standard
neural network which is not fully connected. Since neural networks are
universal approximators, it's a bit silly to talk about something else being
more powerful; it's all about the prior, bias, and training, which are all
subject to the No Free Lunch theorem.

[1] [https://arxiv.org/abs/1312.4400](https://arxiv.org/abs/1312.4400)
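To make the flattening argument concrete, here is a toy check (with
arbitrarily chosen weights) that a layer built from two independent
sub-networks is numerically identical to one ordinary layer whose weight
matrix is block-diagonal, i.e. a standard network that is simply not fully
connected:

```python
def relu(v):
    return [max(x, 0.0) for x in v]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

x = [0.5, -1.0, 2.0, 1.5, -0.5, 0.25]

# Two independent 3x3 sub-networks, each seeing half of the input.
w_a = [[1.0, 0.0, 0.5], [0.2, -1.0, 0.3], [0.0, 0.4, -0.6]]
w_b = [[-0.5, 1.0, 0.0], [0.7, 0.2, -0.1], [0.3, 0.0, 0.9]]
subnet_out = relu(matvec(w_a, x[:3])) + relu(matvec(w_b, x[3:]))

# The same computation as ONE ordinary layer whose weight matrix is
# block-diagonal: cross-block connections simply have weight zero.
w_block = [row + [0.0, 0.0, 0.0] for row in w_a] + \
          [[0.0, 0.0, 0.0] + row for row in w_b]
flat_out = relu(matvec(w_block, x))

print(subnet_out == flat_out)  # prints True
```

The "network of networks" and the sparse flat layer are the same function;
the nesting only changes how we draw it.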

~~~
empath75
You can also compose functions together but there’s a reason that programmers
don’t generally jam everything into a single function or think about programs
that way.

~~~
svantana
Right, which is one of the main critiques of deep learning: there is no
separation of concerns or encapsulation, just a single function mapping input
to output. But at the end of most days, performance is what matters.
Similarly, the brain hardly has a "clean" structure; it's seemingly spaghetti
code, even though there is some structure to it.

------
est31
The model of the artificial neuron is only inspired by the biological one; it
is not meant as an approximation. The goal is instead to obtain good results
in actual applications.

~~~
arcanus
Yes, and the sooner we dispel the absurd notion that we have any evidence we
are closely modeling the human brain, the better.

~~~
armada651
Sure, but it's helpful to compare our models of artificial intelligence with
biological intelligence to see if there's anything to be learned.

We learned how to make airplane wings from the shape of a bird's wing. Of
course we should not model our artificial wings so closely as to make a plane
with wings that flap. But there was still plenty of stuff to learn by asking
the question "why does a bird fly and my contraption doesn't?"

~~~
FakeComments
As an explicit example, winglets on airplanes were conceptualized from
watching the way bird wings flap, observing the curl on the outer edge of the
wing, discovering that it controls vortex formation, and then applying the
same concepts to fixed wings.

That kind of thing happens all the time in aerodynamics, fluid dynamics,
mechanics, etc — precisely because evolution is a pretty good optimization
function, and so “natural” solutions can often be very close to optimal, but
using hard-to-discover quirks of physics.

~~~
trentlott
One of my favored arguments for maintaining as much biodiversity as we
possibly can.

Each species' death is millions of years of labwork trashed.

~~~
Radim
Labwork with goals as loosely defined as life's ("reproduce", "accelerate the
rise of entropy") is costly. It's a robust, deep objective long-term, but
extremely inarticulate with poor ROI short-term.

A species of spider nailing down how to live on a particular type of rock on a
particular island, in a very particular environment, over millions of years,
is simply not articulate enough "lab work".

Which is of course _not_ an argument for killing off species. But it's an
argument against approaching that _moral_ question from such a utilitarian
perspective. You might easily end up with results you don't like once you do
the cost/benefit analysis in a less hand-wavy manner.

~~~
_Schizotypy
I think the fact that it IS so inarticulate, but that the work has already
been put in, is one of the best reasons to protect biodiversity

~~~
Radim
The "sunk cost" fallacy :-)

~~~
_Schizotypy
yep life was wasteful, time to wipe it out and start over amirite?

------
novaRom
Off topic, but it is really uncomfortable to read anything on Medium because
25% of my screen is covered by a header containing 'Sign In' and 'Get Started'
buttons and by a footer with a 'Get Updates' button.

~~~
paol
In addition to the solutions in the sibling comments, let me offer my favorite
one: a simple bookmarklet that zaps all sticky elements on the page. Works
wonders in Medium and a lot of other sites.

    
    
    javascript:(function()%7B(function%20()%20%7Bvar%20i%2C%20elements%20%3D%20document.querySelectorAll('body%20*')%3Bfor%20(i%20%3D%200%3B%20i%20%3C%20elements.length%3B%20i%2B%2B)%20%7Bif%20(getComputedStyle(elements%5Bi%5D).position%20%3D%3D%3D%20'fixed')%20%7Belements%5Bi%5D.parentNode.removeChild(elements%5Bi%5D)%3B%7D%7D%7D)()%7D)()

URL-decoded, it reads:

    javascript:(function () {
        var i, elements = document.querySelectorAll('body *');
        for (i = 0; i < elements.length; i++) {
            if (getComputedStyle(elements[i]).position === 'fixed') {
                elements[i].parentNode.removeChild(elements[i]);
            }
        }
    })()

~~~
3PS
Thank you, this works like a charm! I tried it on tapas.io where it got rid of
both the annoyingly thick header as well as the sidebar.

------
dschuetz
"... Our analogies often look to artificial neural networks: for neural
networks compute, and they are made of up neuron-like things; and so,
therefore, should brains compute. But if we think the brain is a computer,
..." OK, enough of this. Neurons are not computers. There is nothing what can
be compared to actual neurons. "Artificial" neurons are just reduced models of
the real ones, so that only the "compute" parts are used to calculate input
vectors. It's only a fraction of what the real neurons actually do.

While I appreciate the article trying to actually understand what's really
going on in neural networks, let's not make unnecessary dumbed-down
assumptions. At least the subtitle of the article is actually correct. The
main title is sensationalist "...17 billion computers!!".

~~~
nzjrs
It's funny, I thought the opposite. I was happy to read an explanation of the
hierarchy of computation by an eminent and respected systems neuroscientist,
rather than the musings of an undergrad computer scientist on their first
encounter with nature neuroscience.

~~~
mjburgess
An ANN is just calculus with matrices; the mathematics alone does not lend
itself to any "neuronal" description, which is ad hoc and imposed from the
outside. You can draw many computations in "neuronal" form, e.g. a logistic
regression. It's really just a way of diagramming math.

"Computer" is an observer-relative term. There is no physical property of a
system which makes it a computer. A "digital computer" is just a tool made of
silicon which we use to aid computation (a goal we have). There are many tools
(from an abacus to a waterfall) that we can use to aid in computation.

"Computation" isn't anything other than a goal we have. To interpret the brain
as engaged in it carries no information and says nothing explanatory. The
sense in which a brain is a computer is the same sense in which everything is:
a physical system whose state evolution can be used to aid in computation (but
isnt: no one uses brains to compute).

~~~
marmaduke
I don’t think it’s helpful to eliminate teleological accounts categorically.
This writing is clearly for public consumption and benefits from simplifying
descriptions of what’s happening, so that the lay reader might have the
impression of having understood something.

Computational neuroscience could be written off by your statement that the
brain is not a computer; perhaps soften your stance and accept that treating
it as one allows applying tools from computer science to ask questions, just
as physicists do.

~~~
mjburgess
Well, my stance is hard for the sake of ruling out computer science as an
explanatory framework for neuroscience.

Computational metaphors aren't explanatory; they're illusory sorts of
explanations (like narrative) which "satisfy" without providing a causal model
(i.e., a scientific explanation).

I'm not convinced they have been helpful, and mostly end up giving deeply
mistaken impressions about the nature of digital computers -- rather than
helpful impressions about the nature of brains.

~~~
marmaduke
Computational metaphors about the brain usually acknowledge explicitly the
nonlinearity of the information transformation, and use the word "computation"
under the assumption that the transformation is doing something useful. This
hardly seems controversial, so I find myself a bit allergic to the statements
you’re making.

------
davesque
Well, a neural network is a neural network of neural networks.

~~~
jarfil
I think the point of the article is that brain neurons are not equivalent to
the ML representation of neurons, so that count of 17 billion in the cortex
would actually require many more "ML neurons" to be simulated.

It also explains how we can keep adding complexity even if no new neurons are
being created, since the branches themselves act like extra neurons.

~~~
taneq
Wouldn't that just mean the brain's more like a small-world network, with
small highly connected 'blobs' tied together in a sparser, larger network?

~~~
shagie
That is one theory. Multiple "agents" is something that has been proposed. I
don't have the background to read beyond the abstract of
[https://eml.berkeley.edu/~webfac/malmendier/e218_sp06/Carrillo.pdf](https://eml.berkeley.edu/~webfac/malmendier/e218_sp06/Carrillo.pdf)

> We model the brain as a multi-agent organization. Based on recent
> neuroscience evidence, we assume that different systems of the brain have
> different time-horizons and different access to information. Introducing
> asymmetric information as a restriction on optimal choices generates
> endogenous constraints in decision-making.

There's also _The Society of Mind_ (
[https://en.wikipedia.org/wiki/Society_of_Mind](https://en.wikipedia.org/wiki/Society_of_Mind)
) by Marvin Minsky (which _is_ very readable)

> A core tenet of Minsky's philosophy is that "minds are what brains do". The
> society of mind theory views the human mind and any other naturally evolved
> cognitive systems as a vast society of individually simple processes known
> as agents. These processes are the fundamental thinking entities from which
> minds are built, and together produce the many abilities we attribute to
> minds. The great power in viewing a mind as a society of agents, as opposed
> to the consequence of some basic principle or some simple formal system, is
> that different agents can be based on different types of processes with
> different purposes, ways of representing knowledge, and methods for
> producing results.

------
orbifold
Another thing that is not well known about the brain (among non-specialists)
is that there are roughly an order of magnitude more glial cells than neurons
in the brain, which, while non-spiking, definitely respond to synaptic
activity and could be involved in computation.

~~~
_Schizotypy
This cannot be overstated. Glial cells do seem to communicate with traditional
neurons.

------
iandanforth
I take issue with the "dendrites know more than neurons" bit. The fact that
they respond to almost all inputs suggests they are performing a different
function than a somatic spike. My preferred explanation is that _any_ type of
input can be predictive of a somatic spike, and that has to be transduced
somewhere.

Specific patterns of concurrent input on a dendrite drive sub-threshold
depolarization which is theorized to be key for sequence prediction.

------
ianai
Is this the first time the concept of neural networks of neural networks has
been proposed? I think it’s close to an idea I’d been knocking around in my
head but never studied NNs deeply enough to encounter.

I wouldn’t be shocked if consciousness were composed of hundreds or even
thousands of NNs. Or even a tree thousands of levels deep.

~~~
dredmorbius
Marvin Minsky's _Society of Mind_ was published in 1986.

 _A core tenet of Minsky's philosophy is that "minds are what brains do". The
society of mind theory views the human mind and any other naturally evolved
cognitive systems as a vast society of individually simple processes known as
agents. These processes are the fundamental thinking entities from which minds
are built, and together produce the many abilities we attribute to minds. The
great power in viewing a mind as a society of agents, as opposed to the
consequence of some basic principle or some simple formal system, is that
different agents can be based on different types of processes with different
purposes, ways of representing knowledge, and methods for producing results._

[https://en.wikipedia.org/wiki/Society_of_Mind](https://en.wikipedia.org/wiki/Society_of_Mind)

~~~
guskel
As far as I know, nobody takes the "Society of Mind" theory seriously anymore
and none of it lives in present day AI work. There was never a clear algorithm
that could be constructed from the chapters on K-lines.

------
voidmain
A neural network of neural networks is... a bigger neural network. Having two
or three layers of nonlinearities per "neuron" doesn't do anything
qualitatively different.

There are probably lots of huge differences between NNs and brains but this
article is really making the case that the brain _can_ be modeled as a big NN,
just with a few thousand times more activations than neural cells.

~~~
rusticpenn
The brain can indeed be modelled by NNs; however, neurons in the brain are
more complex and require more detailed neuron models, like the Hodgkin–Huxley
model, or can be approximated by more simplified models, like
integrate-and-fire models. ( ref:
[https://www.humanbrainproject.eu/en/](https://www.humanbrainproject.eu/en/) )

~~~
j7ake
What's the evidence that "brain" (whatever that means) can be modelled by NNs?
What features of the brain can NNs model?

------
sica07
What I don't understand is the part about the supralinear/sublinear
particularity of the dendrite. First, the article explains that: "If enough
inputs are activated in the same small bit of dendrite then the sum of those
simultaneous inputs will be bigger than the sum of each input acting alone
(...) A bit of dendrite is “supralinear”: within a dendrite, 2+2=6." Further
in the article, I find this explanation: "Because dendrites are naturally not
linear: in their normal state they actually sum up inputs to total less than
the individual values. They are sub-linear. For them 2+2 = 3.5".

What makes the difference between a bit of dendrite producing a sublinear vs.
a supralinear "result"? I feel that the difference lies in "if enough inputs
are activated" vs. "in their normal state". If that's the case, what's the
"normal state"? Could anybody help me understand this part?

------
johnnycab
This article probably serves as an amuse-bouche in the fluid world of mapping
or replicating functions of wetware to algorithms and vice versa; the top
highlight, _17 billion neurons_, almost sounds like one of those sampled,
haunting soliloquies in prog or psy-trance tracks, which are usually
restricted to snippets from sci-fi movies or taxonomies of the universe, e.g.
"there are billions and billions of stars..."

This blog post from the Stanford Institute for Human-Centered AI, dealing with
a similar subject matter, is wide-ranging, incisive and replete with sources.

[https://hai.stanford.edu/news/intertwined-quest-understanding-biological-intelligence-and-creating-artificial-intelligence](https://hai.stanford.edu/news/intertwined-quest-understanding-biological-intelligence-and-creating-artificial-intelligence)

------
anonoholic
So, a deep neural network?

------
casual_slacker
Not sure if I understand, but it seems the dendrites are "grouped" in a way,
and their influence on the output is a function of the group?

Is this functionally equivalent to having a two layer mini-network (that
represents one brain neuron), with one neuron on top, and "child" neurons on
bottom that mimic the grouping behavior? If this is true, then I would suspect
our networks are already doing something like this automatically.

~~~
buboard
Yes, the linked papers deal with this 2-layer abstraction of a single neuron.
In reality, however, neuron-to-neuron connections are different from
dendrite-soma coupling, and those two levels (dendrite and soma) differ in
their ability to integrate synaptic inputs and undergo plasticity, so they're
not really equivalent. This is still an active area of research with a lot of
unknowns.

------
kingkawn
The takeaway seems to be more about degrees of complexity than any particular
structural component taking precedence.

------
coinward
So are we going to be able to use any of this for a new deeper deep learning
framework?

~~~
wetpaws
Not really

~~~
fizx
If you handwave enough at this, it looks like capsule networks.

------
kristianov
At which point should we start to call it neural internet?

------
xenadu02
This seems to make intuitive sense. If we ever create a true AI it will
probably be on the order of billions of neural networks connected together.

~~~
visarga
We're already attempting stuff at that scale. GPT-2 has 1.5 billion
parameters.

[https://openai.com/blog/better-language-models/](https://openai.com/blog/better-language-models/)

------
xpuente
Metabotropic channels/synapses? Sadly, the most frequent kind is missing
there.

------
westurner
Metadata is not just data

------
User23
I've never understood the almost religious devotion many hackers have to the
idea that the brain is a computer. The brain, or more practically the brain,
body, and a pencil and paper, can slowly simulate a Turing machine without
great difficulty. But a Turing machine can simulate a DFA too, and that
doesn't make it one.

This should not be construed as denigrating the wonderful achievements of AI
researchers. Just because what they do is inspired by the brain rather than
isomorphic to the brain doesn't mean it isn't great work.

~~~
urgoroger
In accordance with the Church-Turing thesis, the Turing machine is capable of
doing anything that should be called computation. It follows that if the
brain is capable of simulating a Turing machine (such a simulator is called a
universal Turing machine, by the way), then it too can do any computation. So
the class of things that both can do is the same, and it is reasonable to
call them the same thing, in some sense.

~~~
FakeComments
This only shows computers are a subset of what brains can do, not that Turing
machines can do whatever brains can do.

~~~
mr_toad
If we conjecture that any physical process can be simulated by a computation
then it follows that a Turing machine can simulate it.

While we don’t have any proof of this conjecture (as far as I know) neither
have we discovered any exceptions.

This also doesn’t rule out the possibility of non-physical or non-mechanical
elements in the brain (dualism/vitalism) but frankly I don’t even entertain
that notion.

~~~
FakeComments
You’re just begging the question: if you assume your conclusion, any claim
holds.

Which is exactly my point — everyone is completely okay with those
assumptions, without justifying that. I find it suspect.

How about showing physical processes are necessarily Turing computable, that
is, justifying your underlying assumptions, before the straw man implication
that I’m talking about dualism?

The mathematical equivalent of your argument is that because all finite-length
approximations of a number are rational, the number itself must be rational —
but this is untrue, in the general case. And in fact, for almost no numbers
does a finite set of those rational approximations yield a general rule to
predict the full structure of the number.
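The analogy can be made concrete with √2: every finite decimal truncation of
it is exactly rational, yet the limit is not, so a property shared by all the
approximations need not transfer to the object being approximated. (The
choice of √2 and of decimal truncation is just for illustration.)

```python
from fractions import Fraction
from math import isqrt

def sqrt2_truncated(digits):
    """The decimal expansion of sqrt(2) cut after `digits` digits,
    represented as an exact rational number."""
    scale = 10 ** digits
    return Fraction(isqrt(2 * scale * scale), scale)

# Every finite truncation is exactly rational...
approximations = [sqrt2_truncated(n) for n in range(1, 6)]
print(approximations[:2])  # [Fraction(7, 5), Fraction(141, 100)]

# ...yet none of them is sqrt(2): each square still falls short of 2,
# and the limiting value itself is irrational.
print(all(a * a < 2 for a in approximations))  # True
```

No finite set of these rational snapshots reveals that their limit fails to
be rational, which is the shape of the objection above.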

It’s therefore unclear that our limited scientific models being computable
mean the underlying object they’re approximating is computable. But if we
don’t know reality is computable, then we don’t know it can be simulated on a
Turing machine.

Just assuming an answer doesn’t help us resolve the claim.

