
What intelligent machines need to incorporate from the neocortex - dendisuhubdy
http://spectrum.ieee.org/computing/software/what-intelligent-machines-need-to-learn-from-the-neocortex
======
return0
Recently Matt Taylor from Numenta did a Reddit AMA
([https://www.reddit.com/r/artificial/comments/6beeqj/5182017_...](https://www.reddit.com/r/artificial/comments/6beeqj/5182017_1200_pm_pst_iama_with_matt_taylor_numenta/)).
I was disappointed to find out that Numenta does not really have
collaborations with neuroscientists to test or adapt their theories. Their
theories are not well known to computational neuroscientists either. In that
sense, I'm not even sure about Numenta's authority on neocortical theories.

For example, the article mentions the ability of clustered synapses to act
independently, but on the one hand it has been shown that independent
dendrites can be approximated as an extra neural-network layer (so they ARE
covered by today's ANN approximations), and OTOH there are a number of papers
showing that synaptic clustering does not exist in sensory areas. And learning
by rewiring is basically the introduction of random connections, which persist
only if their weight increases enough (roughly corresponding to the continuous
formation of filopodia and the fact that large spines persist longer).
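
To make the "learning by rewiring" picture concrete, here is a toy Python sketch (entirely my own construction, not Numenta's model): connections sprout at random with tiny weights, Hebbian-style co-activation strengthens them, and anything that fails to pass a persistence threshold is pruned, loosely mirroring transient filopodia versus long-lived large spines.

```python
import numpy as np

rng = np.random.default_rng(0)
N_UNITS = 20

def rewire_step(weights, active, lr=0.1, prune_below=0.05, n_new=2):
    """One toy 'learning by rewiring' step.

    weights maps (pre, post) synapse pairs to weights. New connections
    sprout at random with a tiny weight; Hebbian-style updates strengthen
    synapses between co-active units; synapses still below the persistence
    threshold at the end of the step are pruned (filopodia retract, while
    large spines persist).
    """
    for _ in range(n_new):  # sprout random candidate connections
        pre, post = rng.integers(0, N_UNITS, size=2)
        weights.setdefault((int(pre), int(post)), 0.01)
    for (pre, post) in list(weights):  # strengthen co-active pairs
        if pre in active and post in active:
            weights[(pre, post)] += lr
    for syn in [s for s, w in weights.items() if w < prune_below]:
        del weights[syn]  # prune synapses that failed to strengthen
    return weights

weights = {}
for _ in range(50):
    active = set(int(i) for i in rng.choice(N_UNITS, size=5, replace=False))
    weights = rewire_step(weights, active)
print("surviving synapses:", len(weights))
```

The surviving connections are, by construction, exactly those whose weights grew enough; all constants here are arbitrary.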

Machine learning at the moment is an empirical science that has made great
strides without consulting neuroscience. I think that has been a good thing:
without having to bend toward some biological plausibility, researchers have
been more exploratory and creative, which has led to an empirical body of
knowledge from which neuroscience could benefit in the future. OTOH, having
watched the field of computational neuroscience, there has not been a lot of
progress since, basically, the '80s. So I believe it would be best to let each
of the two fields go its own way.

~~~
trextrex
> OTOH, having watched the field of computational neuroscience, there has not
> been a lot of progress since, basically, the '80s. So I believe it would be
> best to let each of the two fields go its own way.

I wouldn't really say that. There's a lot of progress being made in using and
understanding biological processes for useful tasks. Unsurprisingly, the
biological mechanisms are complex and rich and vary a lot across the brain
(like you mentioned with the dendrites; dendrites also work as
coincidence-detection mechanisms in some layers of the cortex [1], for
instance).

[2] gives a very nice overview of what is needed from both machine learning
and computational neuroscience to solve the entire problem of understanding
the human brain.

There was the liquid state machine paper [3] in 2002, which showed that
random networks of spiking neurons can perform some computations and have
memory even in the absence of special learning rules.
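
The idea in [3] can be illustrated with a rate-based cousin of the liquid state machine, an echo-state-style reservoir (this sketch uses tanh units rather than the paper's spiking neurons, and all sizes and constants are my own arbitrary choices): the recurrent weights are random and never trained; only a linear least-squares readout is fitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res = 100  # reservoir size (arbitrary)

# Fixed random recurrent weights -- nothing inside the network is trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1: fading memory

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, 1)."""
    x = np.zeros(n_res)
    states = []
    for t in range(len(u)):
        x = np.tanh(W @ x + W_in @ u[t])
        states.append(x.copy())
    return np.array(states)

# Task: reproduce the input delayed by 5 steps -- pure short-term memory,
# solved by fitting only a linear readout on top of the random network.
u = rng.uniform(-1, 1, (500, 1))
X = run_reservoir(u)
target = np.roll(u[:, 0], 5)
W_out, *_ = np.linalg.lstsq(X[10:], target[10:], rcond=None)
err = np.mean((X[10:] @ W_out - target[10:]) ** 2)
print("delay-5 readout MSE:", round(float(err), 4))
```

The network itself has no learning rule at all; the memory lives in the recurrent dynamics, which is the point the paper makes.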

There has also been quite some work on understanding e.g. the role of
assemblies of neurons (groups of neurons firing), assembly sequences,
plasticity rules, theory formulating neural activity in a probabilistic
manner, a better understanding of the role of inhibition, dendrites etc.

And lastly, there have been huge advances in neuromorphic computing, both
digital and analog. In some cases, the performance of these chips (which use
spiking neurons) approaches that of state-of-the-art machine learning; see
e.g. [4].

[1]
[http://www.sciencedirect.com/science/article/pii/S0166223612...](http://www.sciencedirect.com/science/article/pii/S0166223612002032)

[2]
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5021692/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5021692/)

[3]
[http://www.mitpressjournals.org/doi/abs/10.1162/089976602760...](http://www.mitpressjournals.org/doi/abs/10.1162/089976602760407955)

[4]
[http://www.pnas.org/content/113/41/11441.abstract](http://www.pnas.org/content/113/41/11441.abstract)

~~~
Seanny123
Nice references! I have some criticism of "Toward an Integration of Deep
Learning and Neuroscience" which I've talked to the authors about. I felt like
they downplayed the importance of integration between neuroscience and machine
learning [1], while over-playing the biological plausibility of Deep Learning
[2]. I'm also uncomfortable with the characterization of Neuromorphic
computing as "Deep Learning on a Chip", as this dismisses the possibility of
online learning and neural dynamic systems [3], which I think are necessary
for truly intelligent systems.

As for Liquid State Machines, those evolved into Reservoir Computing and Echo
State Networks. If you want to read more about them, I would recommend this
paper comparing them to the Neural Engineering Framework [4] (with code!) to
get a good idea of the state of the field.

[1] [https://medium.com/@seanaubin/deep-learning-is-almost-the-
br...](https://medium.com/@seanaubin/deep-learning-is-almost-the-
brain-3aaecd924f3d)

[2]
[https://cogsci.stackexchange.com/q/16269/4397](https://cogsci.stackexchange.com/q/16269/4397)

[3] [https://medium.com/@seanaubin/a-way-around-the-coming-
perfor...](https://medium.com/@seanaubin/a-way-around-the-coming-performance-
walls-neuromorphic-hardware-with-spiking-neurons-facd4291b201)

[4]
[https://github.com/arvoelke/delay2017/blob/master/delay2017....](https://github.com/arvoelke/delay2017/blob/master/delay2017.pdf)

------
computerex
For those interested, Jeff Hawkins wrote a book called On Intelligence and he
is largely the guy behind:
[https://en.wikipedia.org/wiki/Hierarchical_temporal_memory](https://en.wikipedia.org/wiki/Hierarchical_temporal_memory)

Deep learning proponents focus on solving specific problems using mathematical
models inspired by biology, whereas HTM proponents argue that biology is
important and should play a more central role. Deep learning folks are more
geared toward applied AI, whereas HTMs are far more ambitious, trying to solve
the problem of intelligence/AGI.

Numenta, the company behind HTMs, has a platform called NuPIC for "intelligent
computing":
[https://github.com/numenta/nupic](https://github.com/numenta/nupic)

But as far as I know, HTMs have never been successfully applied to anything
non-trivial.

~~~
paladin314159
I recently read On Intelligence; it's a very interesting take on the core
principles of the brain. Perhaps someone more knowledgeable can comment, but
it feels like deep recurrent networks do a pretty good job of capturing the
concepts that Jeff Hawkins considers really important: hierarchy/invariance
and sequences/prediction. Plus, they're really effective in practice!

~~~
computerex
The concept Hawkins considers important above all else is the biology. He
rejects deep learning as a fundamentally different form of computation from
the one the brain performs. There are no parallels to be drawn between RNNs
and HTMs, really; apples and oranges. HTM's goal is literally to emulate the
neocortex.

~~~
etiam
"Emulate the neocortex" is at best a metaphoric shorthand that will be
misunderstood by many, and quite likely the wrong way of thinking about it. It
is certainly not to be taken literally.

There are plenty of things going on in a real neocortex that are not essential
to intelligence and Numenta was working hard from the start to pick out the
important ones at an appropriate level of modeling. This distinction becomes
especially important in contrast to things like Henry Markram's projects,
where people seem to be happily spending major-multi-million-euro sums of
public funds to run poorly understood simulations with insufficient detail on
a really big computation cluster.

RNNs are crude compared to the HTM as well as to the more recent CLA model,
but they do share, for instance, the formulation as a dynamical system with
recurrence. Some of the publications coming from people at Numenta have made
direct comparisons between the CLA and, for instance, LSTM-based networks, and
in my opinion it is entirely reasonable to do so: they are all suited to
sequence modelling.

Having said that, I agree the comparison to RNNs isn't to be made without
understanding some of what goes into the HTM or CLA, and I agree that Hawkins
emphasizes biological intelligence.

------
amelius
And perhaps what the neocortex needs to incorporate from intelligent machines:
backpropagation?

I mean, so far we have no biological evidence of backpropagation, and it seems
pretty useful.
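
For readers unfamiliar with it, backpropagation is just the chain rule applied layer by layer. A minimal numpy sketch (network size, seed, and learning rate are arbitrary choices of mine) trained on XOR:

```python
import numpy as np

rng = np.random.default_rng(2)

# XOR training data for a tiny two-layer network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1 = rng.normal(0, 1, (2, 8))
W2 = rng.normal(0, 1, (8, 1))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

losses = []
for _ in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1)
    out = sigmoid(h @ W2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: the chain rule, sending the error "backwards"
    # through the same connections used in the forward pass.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h
print("loss:", round(losses[0], 4), "->", round(losses[-1], 4))
```

The backward pass reusing the forward weights (`W2.T`) is exactly the step with no obvious biological counterpart, which is what the comment is getting at.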

------
bobosha
I have followed Hawkins's theory from its inception ca. 2005; since then it
has been through multiple evolutions, from hierarchical Bayesian models (with
then-collaborator Dileep George) to a sparse-representation approach with
Subutai. However, they have struggled to find a "killer app": barring some toy
examples, there is very little by way of real-world use cases.

------
shahbaby
I strongly believe that if anyone is going to solve human-level intelligence
in our lifetime, it's going to be Numenta.

'But we can't even figure out worms!' \- Worms are made up of neurons, but
they perform a different function from what the neocortex does, so studying
them is not like studying a simpler problem; it's studying an entirely
different one.

'But ML can do X better!' \- Unlike industry or academia, Numenta's primary
goal is to figure out how the neocortex works; it's not about profit or
publications.

'Biology contains details we don't need!' \- Numenta's approach is not
biologically inspired, it's more like biologically constrained. They avoid
implementations that are _functionally_ different from how the neocortex
works.

I would highly suggest reading On Intelligence to learn more.

~~~
foldr
>Worms are made up of neurons, but they perform a different function from what
the neocortex does, so studying them is not like studying a simpler problem;
it's studying an entirely different one.

Huh? Take your pick of creatures that display highly intelligent behaviour and
have less complex nervous systems than humans. Insect navigation would be a
good example. If we can't figure out the biological basis of that -- when in
many cases we already have a good idea of what calculations are being
performed -- then our chance of "solving human level intelligence" is zero.

~~~
shahbaby
The problem is in defining intelligence by behavior. What we see as
intelligence in insects most likely originates from an entirely different
process.

Jeff talks about this exact issue in a speech he gave 10 years ago. He
mentions it around the 11-minute mark, but the whole video is a great intro to
what this is all about.
[https://www.youtube.com/watch?v=G6CVj5IQkzk](https://www.youtube.com/watch?v=G6CVj5IQkzk)

~~~
foldr
> What we see as intelligence in insects most likely originates from an
> entirely different process.

That is pure speculation.

As far as I can tell, what you're suggesting is that it might possibly be
easier to figure out how human intelligence works than it would be to figure
out how an insect performs a few calculations, in cases where we know almost
exactly which calculations are being performed (see e.g.
[http://science.sciencemag.org/content/312/5782/1965](http://science.sciencemag.org/content/312/5782/1965)).

I don't see any reason to think that this is true other than wild optimism.

------
danharaj
How would one go about self-studying neuroscience to the level of say, a
second year grad student not yet specializing in anything? That is to say,
familiar with the basic concepts and able to make sense of current research
given enough time.

~~~
bra-ket
[https://www.amazon.com/Principles-Neural-Science-Fifth-
Kande...](https://www.amazon.com/Principles-Neural-Science-Fifth-
Kandel/dp/0071390111)

~~~
return0
That and Larry Squire's Fundamental [https://www.amazon.com/Fundamental-
Neuroscience-Fourth-Squir...](https://www.amazon.com/Fundamental-Neuroscience-
Fourth-Squire/dp/0123858704) are the bibles. Don't be intimidated by the size
of the books they are actually easy to read.

------
SAI_Peregrinus
"Machines won’t become intelligent unless they incorporate certain features of
the human brain."

How arrogant! The features they list aren't unique to humans, or even to
mammals. They're all present in the nidopallium of birds as well. Machines
won't become intelligent unless they incorporate certain features of
intelligent life.

------
meri_dian
>"Intelligent machines don’t have to model all the complexity of biological
neurons, but the capabilities enabled by dendrites and learning by rewiring
are essential. These capabilities will need to be in future AI systems."

This is the fundamental challenge that we face. Our ability to build AI that
can emulate human thought is not limited by a poor understanding of the brain.
Our current theoretical understanding of how human cognition arises from
neural processes is probably close to a level sufficient to build human level
AI. What limits our progress is the staggering computational demand of
simulating a massive network of highly dynamic units.

Shortcuts, simplifications and clever algorithms will only get us so far. At
this point, processing power rather than theoretical understanding is the
limiting factor.

~~~
nabla9
>Our current theoretical understanding of how human cognition arises from
neural processes is probably close to a level sufficient to build human level
AI.

This is not true at all.

The brain of C. elegans (a roundworm) has been mapped exactly (its connectome
is known), and we know its 302 neurons and 8,000 synapses well (it has just
959 cells in total), but we still can't fully understand how its primitive
brain works. It doesn't even have spiking neurons, and it's still a mystery,
even though it would be relatively straightforward to simulate. There is even
software for doing it.

To fully understand how even a simple brain works, we must understand gene
expression inside brain cells, the role of somatic brain mosaicism, brain
chemistry, the neural connectome, how cortical columns work, neural coding,
etc.

Once you get all of this right, you need to fine-tune it. Hyperparameter
optimization for human-level AI, so that it's not epileptic, autistic,
schizophrenic, manic, an idiot, or the AI equivalent of these and a million
other things, must be a really hard process.

\---

[http://www.openworm.org/](http://www.openworm.org/)

[http://bluebrain.epfl.ch/page-52741-en.html](http://bluebrain.epfl.ch/page-52741-en.html)

[http://cajalbbp.cesvima.upm.es/](http://cajalbbp.cesvima.upm.es/)

~~~
grondilu
Thanks for reminding us of OpenWorm. The fact that scientists are still
working on simulating an animal with barely a thousand cells surely puts
claims about simulating a mammalian brain, let alone a human one, into
perspective.

Frankly, it's even kind of discouraging.

~~~
Houshalter
Meh, evolution is known to write utterly incomprehensible spaghetti code. It's
really not surprising that reverse engineering it is so hard.

Write a very simple genetic algorithm to evolve artificial neural networks or
computer code. It can often solve simple tasks in creative ways, but trying to
understand the output is usually a nightmare: it comes up with ridiculously
convoluted and insane ways of doing things. E.g.:
[https://www.damninteresting.com/on-the-origin-of-circuits/](https://www.damninteresting.com/on-the-origin-of-circuits/)

>The plucky chip was utilizing only thirty-seven of its one hundred logic
gates, and most of them were arranged in a curious collection of feedback
loops. Five individual logic cells were functionally disconnected from the
rest— with no pathways that would allow them to influence the output— yet when
the researcher disabled any one of them the chip lost its ability to
discriminate the tones. Furthermore, the final program did not work reliably
when it was loaded onto other FPGAs of the same type.

Now imagine a system many times larger and more sophisticated than that, that
evolution has been hacking on for hundreds of millions of years.
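
The "very simple genetic algorithm" mentioned above can be sketched in a few lines; this toy (my own construction, using the classic OneMax task rather than the article's FPGA experiment) shows the basic mutate/crossover/select loop, in which nothing about *how* the solution works is ever designed:

```python
import random

random.seed(0)
GENOME_LEN, POP, GENS = 16, 30, 60

def fitness(g):
    # The classic "OneMax" task: fitness is simply the number of 1 bits.
    return sum(g)

def mutate(g, rate=0.05):
    return [1 - b if random.random() < rate else b for b in g]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]  # truncation selection with elitism
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children
best = max(pop, key=fitness)
print("best fitness:", fitness(best), "of", GENOME_LEN)
```

On OneMax the result is readable because the task is trivial; on anything richer (circuits, networks), the same loop happily produces the kind of opaque, interdependent solutions the FPGA story describes.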

Doesn't mean we aren't close to human level AI though. We didn't completely
reverse engineer birds before building airplanes, or horses before building
cars.

------
andbberger
Numenta makes my kook sense tingle... but Jeff Hawkins did found the Redwood
Center for Theoretical Neuroscience (now part of HWNI), and for that I am
grateful.

~~~
taneq
Yeah, the ratio of bold statements to demonstrated effectiveness seems a
little high. And after all their talk about embodied sensorimotor whatnots,
HTM doesn't seem that good at, well, actually _doing_ things, compared with
mainstream deep learning (e.g. OpenAI Gym stuff).

Also, I don't get why their method cares about the sampling rate. It just
seems weird.

> A data sampling rate from once per minute to once per hour, with the “sweet
> spot” being between once per minute and once every five minutes (faster
> velocity data can be aggregated or sampled as well)

------
mr_overalls
"The question of whether Machines Can Think... is about as relevant as the
question of whether Submarines Can Swim." \--Dijkstra (1984) The threats to
computing science (EWD898).

------
theprop
Some professor asks students on the first day of a neuroscience course: if all
there is to know about the brain were a mile long, how far have we gotten so
far?

Students guess a few yards, a hundred feet, ten feet... and the professor says
no, no, not even three inches.

------
gene-h
>Our discovery is that every region of the neocortex learns 3D models of
objects much like a CAD program.

Citation needed. If this were the case, surely we'd have some experiments that
show it? Especially since every part of the neocortex is supposed to be doing
it.

Now, if we could make artificial neural networks that work with 3D data,
learning such things as 3D-data-to-value and 3D-data-to-3D-data mappings, that
would be damn useful: e.g., estimating how much it would cost to make
something from a CAD model, or how aerodynamic a thing is, without running
costly CFD.

I'd also argue that we don't need 'truly intelligent machines' to "build
structures, mine resources, and independently solve complex problems". Ants
and termites are capable of similar tasks, and I'm doubtful the author
considers them 'truly intelligent'.

>it should be possible to design intelligent machines that sense and act at
the molecular scale. These machines would think about protein folding and gene
expression in the same way you and I think about computers and staplers. They
could think and act a million times as fast as a human.

If the author means in simulated environments, we are quite slow at simulating
molecules. For molecular simulation we need something like femtosecond
(10^-15 s) time steps, and simulating each time step takes on the order of
milliseconds, so we are trillions of times slower than real time. That is for
completely classical systems; if we take quantum effects into account, it's
much longer. And our simulation methods for such things are terrible.
Intelligence would help here, but it's not going to be millions of times
faster than a human.
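
The arithmetic behind "trillions of times slower" checks out under the stated assumptions (femtosecond time steps, each taking about a millisecond of wall-clock time to simulate):

```python
# Assumptions from the comment above: femtosecond simulation steps,
# each taking on the order of a millisecond of wall-clock time.
sim_step = 1e-15      # simulated seconds per step
wall_per_step = 1e-3  # wall-clock seconds per step (assumed)
slowdown = wall_per_step / sim_step
print(f"{slowdown:.0e}x slower than real time")  # 1e+12 -> "trillions"
```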

Now, if they mean recording what's happening with a microscope and learning
from that, the problem is that we don't have a perfect microscope for seeing
such things at the nanoscale. So in order to figure out what's going on, we
have to get creative and devise tests for each thing we're trying to analyze.

Now, if the author means nanorobots inside cells doing learning and whatnot,
just having such machines would be useful in and of itself. Heck, if we could
make such things, we wouldn't need to worry about problems such as gene
expression or protein folding, because we'd be able to make our own damn
proteins, or our own damn cells for that matter. Even with Drexlerian tech,
doing this sort of machine learning at this scale is pretty ridiculous:
current nanobot designs provide something on the order of kilobytes of
memory [1]. In addition, gene expression, protein synthesis, and folding are
slow processes; the average protein synthesis time for eukaryotes is 2
minutes [0] (eons as far as simulating these things is concerned!). So getting
this data can't happen much faster than a human can think.

[0][http://people.umass.edu/bioch623/623/Second.Section/7.%20CoT...](http://people.umass.edu/bioch623/623/Second.Section/7.%20CoT.Web.2.08.pdf)
[1][http://www.rfreitas.com/Nano/Microbivores.htm](http://www.rfreitas.com/Nano/Microbivores.htm)

