
The Thousand Brains Theory of Intelligence - headalgorithm
https://numenta.com/blog/2019/01/16/the-thousand-brains-theory-of-intelligence/
======
teabee89
I'm very pleased to see Numenta on the HN front page, as they're doing incredibly
difficult and ambitious work without deep learning's spotlight. They take a
philosophically very different approach: instead of warming up biologically
implausible neural models from the '60s and hoping that with enough data we'll
reach artificial general intelligence (AGI), Numenta founder Jeff Hawkins (well
before Hinton's or Hassabis's recent declarations of DL reaching a dead end)
thinks we should understand our biological neocortex better and reverse
engineer it, because it's the only piece of hardware most scientists agree is
at the source of intelligence. Although planes don't have wings, we had to
understand wing-flapping first to find better ways to fly. If you're
interested, I highly recommend you follow
[https://discourse.numenta.org/](https://discourse.numenta.org/)

~~~
tempestn
> Although planes don't have wings

I take your point that planes don't fly by flapping like birds, but obviously
they do actually have wings.

~~~
wadkar
Depends on what you mean by wings. The word "wings" here has multiple meanings,
with the commonality that both are involved in flying. However, they differ
in their operation, i.e. one kind flaps and the other doesn't.

~~~
Angostura
You'll be claiming that tables don't have legs next.

------
EliasY
This theory reminds me of Marvin Minsky's "society of mind" theory where
multiple agents/parts of a certain brain center encode the same object/concept
in different ways and have a vote/suppress mechanism for processing different
inputs in a heterarchical manner. But his model was far more theoretical than
what I see here.

------
mollerhoj
I don't really understand why this theory is attractive. I hope someone here
can help explain:

Suppose we build 1,000 NNs for object detection, and let them 'vote' on
which model to believe. What would be the best voting mechanism? Probably a
final voting layer, right?

This architecture can be described as a single NN where we disallow
connections between 1000 separate parts of the network in the first n-1
layers.
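
The equivalence described above can be made concrete. The sketch below is a toy
illustration (my own numbers and linear "experts", not anything from the
article): N separate networks plus an averaging vote compute exactly what one
wide network with a block-diagonal first layer computes.

```python
import numpy as np

# Toy illustration: N small linear "experts" each see their own slice of the
# input and emit class scores; a final layer averages their outputs (the
# "voting layer"). The identical computation can be written as ONE wide
# network whose first-layer weight matrix is block-diagonal, i.e. where
# cross-expert connections are disallowed.

rng = np.random.default_rng(0)
n_experts, in_dim, n_classes = 4, 3, 5   # small stand-ins for "1,000 NNs"

# Separate experts: one weight matrix per expert.
experts = [rng.normal(size=(in_dim, n_classes)) for _ in range(n_experts)]
x = rng.normal(size=n_experts * in_dim)          # concatenated inputs
slices = x.reshape(n_experts, in_dim)

votes = np.stack([s @ W for s, W in zip(slices, experts)])
ensemble_scores = votes.mean(axis=0)             # the final voting layer

# Equivalent single network: block-diagonal first layer, then averaging.
W_block = np.zeros((n_experts * in_dim, n_experts * n_classes))
for i, W in enumerate(experts):
    W_block[i * in_dim:(i + 1) * in_dim,
            i * n_classes:(i + 1) * n_classes] = W
single_scores = (x @ W_block).reshape(n_experts, n_classes).mean(axis=0)

assert np.allclose(ensemble_scores, single_scores)
```

Since the cross-expert weights are forced to zero, the wide network can express
strictly fewer functions than an unconstrained network of the same size, which
is exactly the "inferior architecture" worry raised here.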

What would be the advantage of this? I appreciate that the brain - might - do
it, but unless you can show any indication of this with experiments, or argue
for why it would result in a performance increase, what's the point? Anyone
can come up with an inferior architecture and propose that it's what the brain
does.

~~~
MRD85
Is that the best way to describe the architecture if the n-1 layers are fed
with completely different input data?

~~~
mollerhoj
No I guess not. My mistake:

If the input is completely separate, then it makes sense to call it 1000
different NNs.

If the input is not completely separate, then the architecture I described,
with a high percentage of dropout in the first layer, seems like a reasonable
description?

This doesn't address my quarrel with the theory, though: why would that be
superior, either performance-wise or in biological realism?

~~~
jmmcd
I agree that a "voting" mechanism sounds all wrong, not only for the reason
you said.

In the scenario described, where we have some neurons coming ultimately from
the hand and sense of touch, and others coming ultimately from the eye and
sense of vision, and they disagree about what they're perceiving, they're not
voting! First of all, it might be better described as a "negotiation", and in
fairness I think their theory does envisage this (but they do use the term
"vote", albeit in scare quotes).

Second, what this theory misses (as does a lot of recent work which takes
neural networks as a jumping-off point) is the transition from sensory/sub-
cognitive processing up to conscious/cognitive processing. If my hand feels a
cat and my eye sees a coffee cup, then I'll consciously notice the
contradiction and gather more evidence.

~~~
your-nanny
What may be envisioned is a process like the integration of information in
two-choice tasks, for which we have several neural models.

------
jonplackett
I know little about AI and nothing about neurology, but this 'feels' right to
me, based on what I think is happening in my own head. I often feel like
there's different bits in there trying to solve the same problem and sometimes
getting different answers.

------
tranchms
This is a good theory. It’s one of the first that resonates with my
understanding of the brain.

The brain is composed of two hemispheres, each with different regional
faculties working together. Each region is responsible for producing different
models, based on its responsibilities, sensory inputs, regulatory functions
etc.

As a whole the brain is a massive neural network. But each region possesses its
own neural network (grid cells/columns), and I wouldn’t be surprised if there
were sub-neural networks at play, some that are unique to each person, based
on their neurological development, environmental conditioning, and overall
personal adaptations.

It’s easy to see how these regional neural networks are responsible for
specific statistical modeling, which then factors into other regional models,
and so on, in order to arrive at the correct output response.

Also resonates with Gestalt theory/psychology, which makes a lot of sense from
a complex systems processing perspective, I’m just not sure what current
neuroscience says about it.

I think this theory is very promising for AI.

I look forward to seeing more of their applied work.

------
dalbasal
I have no idea if this is even relevant...

I recall reading about experiments with octopuses concluding that individual
organs have different "knowledge."

I.e., if one eye learns to recognise an object, the other one doesn't gain
that recognition. When one tentacle learns a task, the others don't. This
was, IIRC, explained by their more decentralised nervous system.

No spine, plus autonomous decision-making in the limbs, means the brain
controls the limbs mostly by sight: it doesn't get enough signal to control
them by feel.

------
nabla9
They basically describe the core principle behind dropout, boosting etc.

~~~
yorak
Very true, my mind immediately wandered to thinking about Random Forests
[https://en.wikipedia.org/wiki/Random_forest](https://en.wikipedia.org/wiki/Random_forest)
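
To make the Random Forest analogy concrete, here is a minimal majority-vote
sketch in plain Python. The "trees" are hand-written stand-ins for
independently trained decision trees, purely illustrative, not taken from the
article or from any real forest implementation.

```python
from collections import Counter

# Each "model" below is a fixed function standing in for an independently
# trained decision tree; the forest's prediction is the label that the most
# individual models vote for.

def majority_vote(predictions):
    """Return the label predicted by the most models (ties broken arbitrarily)."""
    return Counter(predictions).most_common(1)[0][0]

# Three stand-in "trees" classifying a number as 'small' or 'large'.
trees = [
    lambda x: 'small' if x < 10 else 'large',
    lambda x: 'small' if x < 12 else 'large',
    lambda x: 'small' if x < 5 else 'large',
]

votes = [t(8) for t in trees]        # ['small', 'small', 'large']
print(majority_vote(votes))          # prints: small
```

The point of the analogy: each tree (like each cortical column in the theory)
has only a partial, decorrelated view, yet the vote over many weak models is
robust.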

------
YeGoblynQueenne
I close my eyes and extend a single finger towards the general vicinity of my
keyboard. When my single finger touches the keyboard, I know that it's
touching the keyboard, even what part of the keyboard it's touching.

I seem to have built a mental model of the world that doesn't need the senses
to maintain it. I can use a single sensory input (ish; there are many neurons
on the tip of my finger) to verify it.

That is not explained by the thousand brains theory of intelligence, or any
other theory of intelligence we currently have (and we don't really have that
many).

And, not to put too fine a point on it, a thousand years from now, once (and
if) we exit the dark ages we are very obviously hurtling towards, we will look
back upon these attempts to explain intelligence in the same way that we look
today upon the attempts of alchemists to understand matter. With sympathy and
a little pity.

------
axilmar
...or the brain simply needs more information to identify the cup than the tip
of a single finger provides.

If we touched the cup not just with the tip of the finger but with the whole
finger, we would vastly increase the chances of recognizing the object.

This means the brain doesn't hold individual representations of objects per
body part, but it fuses what its sensors say into a single model and then
tests all its sensory input (the input from all the senses) against what is
stored internally.

If we grab something like a cup with our full hand, we would certainly
identify the object as a cup, but it might not actually be a cup.

This means all our senses participate in object identification.

Thus I don't think the thousand brains theory is correct.

------
thedevil
This is similar to what I've been thinking. I usually say "the brain is more
like a board than a CEO".

One additional interesting tidbit I've noticed is that when the brain decides
on a belief, it seems to suppress alternative beliefs. For example, with the
old-lady-young-lady picture, it's hard to see them both at the same time.
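
This suppression effect has a classic computational toy: mutual inhibition
between two "interpretation" units. The sketch below is a generic
winner-take-all model (my own parameters, not something from the article);
whichever belief starts with a small edge shuts the other one down.

```python
# Toy mutual-inhibition sketch: two "interpretation" units receive the same
# input (1.0 each) but inhibit each other. With inhibition stronger than the
# self-decay, whichever unit starts with a small advantage grows while the
# other is driven to zero, a crude model of why the old-lady and young-lady
# readings of a bistable image suppress one another.

def settle(a, b, inhibition=1.5, rate=0.1, steps=200):
    """Leaky winner-take-all dynamics with rectified (non-negative) activity."""
    for _ in range(steps):
        a_new = max(0.0, a + rate * (1.0 - inhibition * b - a))
        b_new = max(0.0, b + rate * (1.0 - inhibition * a - b))
        a, b = a_new, b_new
    return a, b

a, b = settle(0.55, 0.45)   # interpretation A starts with a slight edge
# a settles near 1.0 (the "chosen" belief); b is suppressed to 0.0
```

With `inhibition` below 1.0 the same dynamics converge to both units active,
so the suppression really does depend on competition being strong enough.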

------
thecupisblue
I support this theory - it's so obvious I'm amazed it isn't widely accepted
already. Numenta has some of the best views on creating AGI modelled after
human intelligence, and their approach could let us create a brain without the
hardware limitations that plague ours.

------
nutellalover
Do they have any data/experiments to support this elegant-sounding theory?

------
joe_the_user
Could the various parts of a neural network be said to have their own models
of the world?

~~~
varjack
I’m thinking about this as similar to having multi-domain independently
trained critics for a GAN.

------
ghthor
I'm dying to find the time to experiment with the nupic codebase.

------
evrydayhustling
I'm excited about 1k brains and, in general, about biological research
informing AI! But I don't understand why folks doing serious work on the
biological/artificial boundary push claims about which mechanisms are
"necessary" to produce intelligent machines.

Hasn't our own progress towards understanding biological intelligence been a
series of theories that supplant and reinterpret one another? Don't we
frequently find new biological mechanisms for information storage and
computation that run parallel to presumed dominant paradigms, in impactful
ways? Why does a discovery need to be necessary rather than useful or
inspirational? I don't think it's common for hard science communities to
describe results in terms of what is required, as opposed to what's possible.

