
Why Does the Neocortex Have Columns? A Theory of Learning Structure of the World [pdf] - blacksmythe
https://www.biorxiv.org/content/biorxiv/early/2017/09/28/162263.full.pdf
======
Cacti
Jeff Hawkins gets shit on a lot in ML because his theories haven’t produced
models with results as good as the mainstream approach, but I’m glad they keep
working at it and keep coming up with interesting ideas.

Too much of ML these days is about some NN model that does 0.n% better than
SOTA on some specific task. Then you change one tiny parameter and the entire
thing breaks, and it turns out we didn’t understand why it was working at all.

~~~
eli_gottlieb
>Jeff Hawkins gets shit on a lot in ML because his theories haven’t produced
models with results as good as the mainstream approach, but I’m glad they keep
working at it and keep coming up with interesting ideas.

And also because Numenta's work isn't good, empirically-checkable neuroscience
either.

~~~
ewjordan
Not sure whether that was a jab at Numenta or not, but I do think the
combination of these two comments cuts to the heart of the PR problem for
Numenta: they're _neither_ trying to do 0.1% better on the classic perception
benchmarks than other algorithms do, nor trying to publish accurate, testable
descriptions of the true details of meatspace neuroscience.

To me, exploring alternative network architectures and algorithms seems an
extremely worthwhile goal even if it's only loosely tethered to actual
biology, but from a PR perspective they really need to be better about priming
the conversation if they want people to care.

Bad (neuroscience-focused): "We're doing a lot of research on neuroscience,
and finding some really interesting stuff, so we built a model that doesn't
exactly match the way the brain works but is still interesting. No, we haven't
tried to make it work to classify ImageNet test cases, that's not our goal.
But look, it's closer to biology, and we have working code that we're playing
with!"

Better (ML-focused): "We're developing a novel neural network architecture
that performs online unsupervised learning using only local update rules.
Though it performs competently at classic benchmarks X, Y, and Z when a small
WTA layer is thrown on top, it can also tackle problems A, B, and C that
classical deep learning networks can't make any progress on."
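The "online unsupervised learning using only local update rules" ingredient of that hypothetical pitch can be sketched in a few lines. This is a generic Oja/Hebbian learner with a k-winners-take-all step, purely illustrative (the sizes, learning rate, and update rule are my assumptions, not Numenta's actual HTM algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy online learner: k-winners-take-all over a weight matrix,
# trained with a purely local Oja/Hebbian rule (no backprop).
n_in, n_units, k = 16, 8, 2
W = rng.normal(scale=0.1, size=(n_units, n_in))

def step(x, lr=0.05):
    act = W @ x
    winners = np.argsort(act)[-k:]  # indices of the k most active units
    for j in winners:
        # Oja's rule: each winner moves toward the input using only
        # quantities local to that unit (its activation and weights).
        W[j] += lr * act[j] * (x - act[j] * W[j])
    return winners

for _ in range(200):
    x = rng.normal(size=n_in)
    x /= np.linalg.norm(x)
    step(x)
```

After training, each winning unit's weight row drifts toward a unit-norm prototype of the inputs it won on; nothing here backpropagates a global error signal, which is the property the pitch emphasizes.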

To be fair, I'm not even sure if Numenta's networks _could_ perform
competently at any classic benchmarks (I'm guessing that if they could, it
would take some work to get them to do so), and I have no idea what new
problems it could work on. But they really do need to reframe the conversation
and emphasize that sort of innovation if they want to be taken more seriously
- focusing on neuroscience underpinnings is not a great move if they're not
engaged in research that can actually win over neuroscientists, and just
pointing out that they're focusing on those things is not a way to win over
industry ML folks if they don't have any results to point at.

~~~
eli_gottlieb
>To be fair, I'm not even sure if Numenta's networks could perform competently
at any classic benchmarks (I'm guessing that if they could, it would take some
work to get them to do so), and I have no idea what new problems it could work
on. But they really do need to reframe the conversation and emphasize that
sort of innovation if they want to be taken more seriously - focusing on
neuroscience underpinnings is not a great move if they're not engaged in
research that can actually win over neuroscientists, and just pointing out
that they're focusing on those things is not a way to win over industry ML
folks if they don't have any results to point at.

It was definitely a jab, but I've also got some sympathy for their project. I
genuinely agree that, well, theoretical and computational neuroscience need to
become more _genuinely_ computational! We're seeing an emerging computational
paradigm for neuroscience that isn't _just_ about jamming "network
architectures" or "neural circuits" together and hoping something works; it
supposedly has strong mathematical principles.

Ok, so where's the code? Sincere question. Some papers do simulations in
Matlab, R, or Python that's just not shared. This includes even papers that
purport to be applying these neuroscience-derived principles to robotics
problems.

Computational cognitive science does a bunch better: _their_ custom-built
Matlab gets shared!

If we really believe our theories, we should put them to the computational
test. If we put them to the test and they don't work well, we should either
revise the theories, or revise the benchmarks. Maybe ImageNet classification
scores are a _bad idea_ for how to measure precise, accurate sensorimotor
inference! New benchmarks for measuring the performance of "real" cognitive
systems are a _great_ idea! Let's do it!

But that requires that we do the slow work of trying to merge
theoretical/computational neurosci, cognitive science, and ML/AI back
together, at least in some subfields. This is challenging, because nobody's
gonna give us our own journal for it until a few prestigious people advocate
for one.

------
electrograv
Full paper here:
[https://www.biorxiv.org/content/biorxiv/early/2017/09/28/162...](https://www.biorxiv.org/content/biorxiv/early/2017/09/28/162263.full.pdf)

I always enjoy reading analysis/ideas/intuitions about how the brain works,
because it provides inspiration for machine learning improvements that can be
applied in the real world.

That said, I’m still optimistically waiting for Numenta (and Geoff Hinton’s
capsule theory) to set the new bar at one of the many difficult
image/speech/language/etc recognition challenges.

Ideas are great, but at the end of the day science moves forward when we
measure our ideas against reality. (To the credit of this paper, it does make
a series of predictions, though those seem extremely difficult to measure in
biological systems for the time being.)

------
platz
Skeptical. Previously the brain was a machine consisting of drives and
pulleys, just like the state of technology of the day. Now the brain is a
computer that runs deep learning models. It would be one thing to just say the
cortex has columns. It's another to go on and model what those columns are
doing with linear algebra. The coincidence that this picture emerges at the
same time as current fads in tech is too great to ignore.

I am more interested in the work that is identifying different kinds of cells,
e.g. place cells.

~~~
freeflight
> The coincidence that this picture emerges at the same time as current fads
> in tech is too great to ignore

It's no coincidence, I see it as a kind of reframing the issue from a
different perspective/approach and for that, we use whatever seems the best
framework of explanation we have at hand during that time. Gotta start
somewhere, can't try to paint a picture without a frame and at least some
colors.

One can easily see the evolution of how the frameworks we use keep getting more
complex, from drives and pulleys to computers and NNs, so there clearly is
some progress in how we are trying to describe the workings of the
brain/consciousness.

In the end, it's also about what's the actual goal here: Trying to understand
the human brain/consciousness or trying to artificially create "consciousness"
or rather something resembling it. The latter doesn't necessarily require the
former.

~~~
rhyolight_
Not consciousness, but intelligence. Big difference.

~~~
analogic
Is it? My personal theory is that consciousness scales: we're more conscious
than a bug, but a bug is still definitely conscious.

Then like A.I. overlords / borg hive mind more conscious still.

~~~
namlem
How far down does the scale go? Are brainless animals like oysters conscious?
What about the mycelium network underneath a forest?

~~~
analogic
Well, maybe a better way to state it: less 'conscious' and more 'things
they're conscious of.'

------
brad0
Great to see numenta in the news again. On Intelligence was the book that got
me excited about biologically based machine learning. The methods in that
book are very different from anything else I’ve seen in current “trendy” ML.

------
rhyolight_
Here are some video resources to help explain this theory:

\- [https://www.youtube.com/watch?v=BvJJn9VS4rk](https://www.youtube.com/watch?v=BvJJn9VS4rk)

\- [https://www.youtube.com/watch?v=-h-cz7yY-G8](https://www.youtube.com/watch?v=-h-cz7yY-G8)

~~~
pault
Here's another very interesting talk from Jeff Hawkins about modelling the
neocortex:
[https://www.youtube.com/watch?v=izO2_mCvFaw](https://www.youtube.com/watch?v=izO2_mCvFaw)

------
Upvoter33
The world needs more work being done in the way Hawkins and co. are doing it,
and less in the mold of most deep learning/ML work. Why? He's actually trying
to connect building intelligent machines with biology. This is a huge problem,
but so few are working on it. Rather, we are all distracted by deep learning
because of its recent successes in very specific problem areas. In a few
years, when we run into its limits, Hawkins and people doing work like this
will have a chance to shine (if they produce something that works, of course).

~~~
bbctol
DeepMind does a ton of work in meat neuroscience, and is actually publishing
useful research in the field (a recent if somewhat controversial review at
[http://www.cell.com/neuron/fulltext/S0896-6273(17)30509-3](http://www.cell.com/neuron/fulltext/S0896-6273\(17\)30509-3))
And they're about the most prominent deep learning group I can think of.

------
skybrian
There is also a summary at:
[https://blog.acolyer.org](https://blog.acolyer.org)

~~~
mikhailfranco
... probably not by pure coincidence.

p.s. specific dated link [https://blog.acolyer.org/2017/10/25/why-does-the-
neocortex-h...](https://blog.acolyer.org/2017/10/25/why-does-the-neocortex-
have-columns-a-theory-of-learning-the-structure-of-the-world/)

------
Double_Cast
> Error! Problem, or Page Not Found

> Sorry, the page you were looking for does not exist.

Link is broken. From browsing, I believe the correct link is

[https://numenta.com/papers-videos-and-
more/resources/layers-...](https://numenta.com/papers-videos-and-
more/resources/layers-and-columns/)

~~~
Avery3R
The link works fine if you disable javascript. Still weird though

------
nopacience
Understanding how the brain works means understanding how we humans see and
understand the world and ourselves within it. The brain organically and
selectively selects what to learn and which information should be retained.
Our brains receive training from birth. Then the family adds some training of
its own, then school and the world. The brain never stops.

Great job, Numenta!

------
mholt
I'm getting "Error! Problem, or Page Not Found Sorry, the page you were
looking for does not exist."

Edit: Weird, I just closed my browser and tried again, and the article looks
like it flashed into the screen then was replaced by the error message.
Happens repeatedly on Chrome on Android...

~~~
dang
I'm seeing that too. Ok, we'll change the URL from
[https://numenta.com/papers/why-does-the-neocortex-have-
layer...](https://numenta.com/papers/why-does-the-neocortex-have-layers-and-
columns/) to the pdf of the paper. There's also a summary at
[https://blog.acolyer.org/2017/10/25/why-does-the-
neocortex-h...](https://blog.acolyer.org/2017/10/25/why-does-the-neocortex-
have-columns-a-theory-of-learning-the-structure-of-the-world/) that other
commenters have mentioned.

------
adamnemecek
I'm probably talking out of my ass, but I'm somewhat suspicious of the cortex
having straight-up columns. I'm curious whether these are artifacts of the
fact that linear algebra seems to be the dominant algebra in ML/modeling of
human perception. Recently I've been dipping my toes into geometric algebra,
which seems to be the superior algebra for just about anything you can think
of (human perception, but also pretty much all of physics: Maxwell's four
equations reduce to a single equation in GA), and it's particularly better
for reasoning about spaces, which this seems to be all about.

And unlike linear algebra it actually makes sense (e.g. why does the cross
product exist only in three dimensions? wtf are determinants, especially in
the context of matrix division, all about?).
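For what it's worth, the "cross product only in three dimensions" complaint is exactly what the wedge (exterior) product in GA addresses; here's a minimal sketch (the function name and examples are mine, purely illustrative):

```python
import numpy as np
from itertools import combinations

def wedge(a, b):
    """Exterior (wedge) product of two vectors in R^n: the bivector
    components a_i*b_j - a_j*b_i for each plane (i, j) with i < j."""
    return {(i, j): a[i] * b[j] - a[j] * b[i]
            for i, j in combinations(range(len(a)), 2)}

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
bv = wedge(a, b)
c = np.cross(a, b)
# In 3D the bivector carries the same data as the cross product:
# bv[(1, 2)] == c[0], bv[(0, 2)] == -c[1], bv[(0, 1)] == c[2]

# Unlike the cross product, the wedge works in any dimension:
a4 = np.array([1.0, 0.0, 2.0, 1.0])
b4 = np.array([0.0, 1.0, 1.0, 3.0])
bv4 = wedge(a4, b4)  # six plane components in 4D
```

Determinants get a similar geometric reading in GA: det(M) is the factor by which M scales the top-grade wedge of the basis vectors, i.e. the signed volume.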

This blog post introduces GA and talks about its relationship to human
perception.

[https://slehar.wordpress.com/2014/03/18/clifford-algebra-
a-v...](https://slehar.wordpress.com/2014/03/18/clifford-algebra-a-visual-
introduction/)

~~~
lamename
Cortical columns exist [1]; think perpendicular to the surface of the cortex
[2, 2nd image, or just image search "cortical column"]. This is due to the
morphology of cortical neurons. My understanding is that a column can be
thought of as a functional unit, and passing information across columns adds
complexity.

Of course biology is messy and there's tons of variation depending on which
brain region you're looking at, but Visual Cortex was one of the earliest
places this was observed. It gets complicated and detailed quickly, and I'm
somewhat out of my element here.

[1]
[https://en.wikipedia.org/wiki/Cortical_column](https://en.wikipedia.org/wiki/Cortical_column)
[2] [http://www.mbfbioscience.com/blog/2012/01/neurolucida-
helps-...](http://www.mbfbioscience.com/blog/2012/01/neurolucida-helps-
florida-researchers-reconstruct-a-region-of-the-rat-brain/)

~~~
SubiculumCode
There are, of course, lateral connections between columns, but the columns are
very real.

I don't see what geometric algebra has to do with it, as the grandparent
suggests.

~~~
adamnemecek
They are performing spatial reasoning and GA is a better algebra for that.

~~~
SubiculumCode
I mean I don't see why geometric algebra bears on the factual existence of
column-like structure perpendicular to the cortical surface in real brains.
Geometric algebra may bear on what those columns do, but it is not relevant to
whether the columns exist. This is a matter of observing network wiring from
brain preparations.

~~~
kortex
I think it has something to do with the power of building hierarchical layers
of abstraction, to make increasingly precise predictions about the world.

