
Jeff Hawkins Is Finally Ready to Explain His Brain Research - tysone
https://www.nytimes.com/2018/10/14/technology/jeff-hawkins-brain-research.html
======
mushishi
This seems to be the article that gives more information: A Theory of How
Columns in the Neocortex Enable Learning the Structure of the World:
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5661005/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5661005/)

Excerpt: A simple thought experiment may be useful to understand our model.
Imagine you reach your hand into a black box and try to determine what object
is in the box, say a coffee cup. Using only one finger it is unlikely you
could identify the object with a single touch. However, after making one
contact with the cup, you move your finger and touch another location, and
then another. After a few touches, you identify the object as a coffee cup.
Recognizing the cup requires more than just the tactile sensation from the
finger, the brain must also integrate knowledge of how the finger is moving,
and hence where it is relative to the cup. Once you recognize the cup, each
additional movement of the finger generates a prediction of where the finger
will be on the cup after the movement, and what the finger will feel when it
arrives at the new location. This is the first problem we wanted to address,
how a small sensory array (e.g., the tip of a finger) can learn a predictive
model of three dimensional objects by integrating sensation and movement-
derived location information.
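The elimination process the excerpt describes can be caricatured in a few lines of Python. This is only an illustrative toy, not Numenta's actual algorithm: each object is modeled as a set of features at locations, and each (location, feature) touch eliminates inconsistent candidates. The objects and features are invented.

```python
# Toy sketch of recognition via accumulated (location, feature) touches.
# One touch is usually ambiguous; a sequence of touches disambiguates.

OBJECTS = {
    "cup":  {(0, 0): "curved", (0, 1): "curved", (1, 0): "handle"},
    "ball": {(0, 0): "curved", (0, 1): "curved", (1, 0): "curved"},
    "box":  {(0, 0): "flat",   (0, 1): "flat",   (1, 0): "edge"},
}

def recognize(touches):
    """Narrow down candidate objects with each (location, feature) touch."""
    candidates = set(OBJECTS)
    for location, feature in touches:
        candidates = {name for name in candidates
                      if OBJECTS[name].get(location) == feature}
    return candidates
```

A single "curved" touch leaves both the cup and the ball in play; feeling a handle at a second location identifies the cup, matching the coffee-cup story.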

~~~
Cixelyn
I think the paper that the NYT article is referencing is actually this one,
published yesterday: A Framework for Intelligence and Cortical Function Based
on Grid Cells in the Neocortex

[https://www.biorxiv.org/content/early/2018/10/13/442418](https://www.biorxiv.org/content/early/2018/10/13/442418)

Jeff Hawkins is scheduled to give a talk at the Human Brain Project Open
Day[1] in Maastricht, NL about this paper.

The one linked in the parent is from last year, related to the mentioned story
of him running his finger across the coffee cup.

[1]: [https://www.hbpopendaysummit-2018.org/programme/keynote-
spea...](https://www.hbpopendaysummit-2018.org/programme/keynote-speakers)

~~~
ilaksh
This paper, which was inspired by Hawkins' work, is interesting to me:

Feynman Machine: A Novel Neural Architecture for Cortical And Machine
Intelligence

[https://www.aaai.org/ocs/index.php/SSS/SSS17/paper/download/...](https://www.aaai.org/ocs/index.php/SSS/SSS17/paper/download/15362/14605&ved=2ahUKEwjgl_Oj5obeAhV7HzQIHdjVDPgQFjABegQIBxAB&usg=AOvVaw2K1D6Saol_npPGbSlI4Alm)

Details in the previous paper:
[https://arxiv.org/pdf/1609.03971](https://arxiv.org/pdf/1609.03971)

------
georgewfraser
The premise of this research, that the cortex is broken up into discontinuous
“columns”, is only partly true. If you record several neurons that are lined
up one on top of another in different layers of the cortex, they will usually
respond to the same stimulus or activity: for example, responding to touch on
the same finger.

But the word “column” implies discontinuous modules. You can actually observe
this in the sensory cortex of mice and rats, where each whisker has a 1-1
relationship with a “column” of neurons that primarily respond to that
whisker. This observation is what originally led neuroscientists to propose
the concept of the column as a fundamental unit of organization of the cortex.

But it turned out that lots of other brain regions don’t have this kind of
discontinuous organization. Neuroscientists still use the term “cortical
column” but they’re really just referring to the fact that vertically aligned
neurons seem to be closely connected.

When the Numenta people talk about “cortical columns” they seem to be
describing the original idea of cortical columns that didn’t pan out. It’s
really weird.

~~~
kopo
Doesn't matter. What matters is whether the idea has engineering utility.

Jeff Hawkins is not a biologist or a neuroscientist (though he would probably
qualify) trying to explain how the brain works. He is an engineer using
whatever bits he sees as potentially useful.

~~~
jeremyjh
The entire premise of his research is that understanding how the brain
actually works is key to developing an artificial general intelligence.

------
divan
His book "On Intelligence" is already extremely good at explaining the basic
principles of how the neocortex works. I still find it one of the most
influential reads ever, and I have been following Numenta's work ever since.

It's been a bit frustrating that after 15 years they still haven't made
headlines with something more practical and groundbreaking, so I'm a bit
excited to hear what will be unveiled at this conference tomorrow.

~~~
mrec
Aside from anything else, you've gotta love a book on something as poorly-
understood as "intelligence" that comes with an appendix of "Testable
Predictions".

His theories may be right or wrong, but he's doing science right.

------
stupidcar
It’s weird how this headline and article make it sound like Hawkins is being
coy about his theories, or prefers to work in secret rather than explain them. But
On Intelligence was published years ago, and contained a section on testable
predictions. Likewise, Numenta has published papers, and made source code
available.

So it doesn’t seem to me that the problem is that Hawkins has been
uncommunicative, it’s that the wider academic and research community just
hasn’t been interested. Perhaps that’s simply because his theories don’t have
merit, but it’s also hard to shake the feeling that it’s more because his
career, and therefore his theories, hasn’t fit the standard academic mould.

It seems to me that, if you want to be an influential theorist, you’re
expected to study at a prestigious university, work with an older, respected
researcher while getting your PhD, then become a professor, set up a research
lab, mentor some PhD students of your own, then finally maybe spin off your
research into some startups, or take a high-paying research position at a big
company.

Hawkins hasn’t followed this path, so he has no network of peers and former
students to evangelise his work and carry it forward. He’s done research and
published it, like the scientific ideal says you should, but he’s missing the
structure necessary to make it land in the research world.

It’s like a sort-of reverse cargo cult. He’s done the work of constructing a
real airstrip and working planes, but he can’t persuade anybody to fly,
because they’ve never seen or heard of air travel before.

~~~
mathewsanders
It’s been a while since I read ‘On Intelligence’, but I think I remember it
containing a pretty good primer on his ideas, and references to an open source
implementation of his predictive algorithm, with an invitation to apply it to
specific domains because his team had no chance of exploring all the
applications.

If tweets were searchable, you’d find I was really taken with this book; I
think I made a pretty grand statement that future generations would compare it
to work like The Origin of Species.

Now I’m interested to dig up my copy and read again :)

~~~
jbenner-radham
I just finished my first read-through of that book last night, after having
had it in my “backlog” for years. I’m kicking myself for not reading it
earlier. It really was a fantastically intriguing read, and I definitely
recommend it to anyone interested in the subject matter.

------
nabla9
Their paper from a few days ago is here:
[https://www.biorxiv.org/content/biorxiv/early/2018/10/13/442...](https://www.biorxiv.org/content/biorxiv/early/2018/10/13/442418.full.pdf)
The rest, along with the code, can be found at
[https://numenta.com/](https://numenta.com/)

The ideas presented and the level of detail in their theory are such that the
most likely response from everyone else is just 'Nice theory' and a shrug.

Hawkins and Co. have been working on this for years. Numenta sends frequent
newsletters and tries to keep people interested, but there is little concrete
there beyond some interesting ideas.

------
mirceal
After reading “On Intelligence” and being disappointed by people eagerly
associating ML with AI, I think this guy and his team may be onto something.
It may not be much, but I welcome any attempt/perspective that is different
from the “mainstream” approaches.

------
bobosha
I have been following Jeff's work since 2005; only time will tell whether it
will bear fruit. But science will be immensely grateful to Jeff for igniting
the passions of several AI researchers during the deepest and darkest of AI
winters (mid-2000s). I was one of them and pursued research at MIT.

~~~
rvense
It was so exhilarating, that promise of maybe knowing ourselves. I remember
watching the lectures on YouTube and getting On Intelligence from the library
at another university department.

I was a student of linguistics when I discovered it, having started a move
from theoretical linguistics to a masters in more computational stuff just as
machine learning was beginning to become more widely used. Hawkins was one of
a few things that combined to make me so completely disenchanted with the
entire field of computational linguistics, especially as it was practiced at
my university, that I just left and became a regular software developer. I
decided I'd rather have to center things in CSS than throw progressively
fancier maths at bag-of-words models and poorly annotated corpora and pretend
it had anything to do with thinking machines.

(I don't regret this move at all since another major factor was I didn't want
to go do a PhD somewhere far away; I do somewhat regret going into
computational stuff in the first place, though - before that I wanted to
become one of those Indiana Jones linguists who go to the mountains and write
grammars of languages only spoken by a handful of 80-year-olds... that would
have been really fun)

------
mrep
This will be exciting. His book "On Intelligence" was my favorite book on how
the brain works neurally and how that relates to computers. Much better than
Ray Kurzweil's books, in my opinion.

~~~
Agebor
I can recommend The Mind is Flat [https://www.amazon.com/Mind-Flat-Illusion-
Mental-Improvised-...](https://www.amazon.com/Mind-Flat-Illusion-Mental-
Improvised-ebook/dp/B077Y95D6V) which is more high level, but still seems
compatible with this theory.

------
teabee89
I’ve been following Numenta’s work since 2010, and today I’m happy to see this
at the top of HN. Hopefully the deep learning community longing for AGI will
finally take a look at Numenta’s work. I’d also like to shout out to Matt,
who’s been involved in the open source work as well as the amazing HTM School
YouTube videos, which I highly recommend for anyone interested in learning
about the underlying concepts:
[https://www.youtube.com/playlist?list=PL3yXMgtrZmDqhsFQzwUC9...](https://www.youtube.com/playlist?list=PL3yXMgtrZmDqhsFQzwUC9V8MeeVOQ7eZ9)

~~~
nupic
Thank you!

------
gfodor
To prove that the brain runs a single learning algorithm would actually be a
bigger breakthrough than going from there to determining the algorithm itself.
What a profound thing it would be to learn that the mechanics of the brain are
fundamentally simple but applied on a massive scale of data, time, and
parallelism. I applaud the effort of trying to uncover this model -- embracing
the idea that The Brain is comprehensible despite A Brain being impenetrably
complex is a liberating concept.

~~~
memory_grep
Surely there has to be a single learning algorithm for most of the skills a
modern human learns. There has been no time to evolve specialized algorithms
for things like chess, programming, riding a bike, etc., and yet we can easily
learn these skills.

~~~
nickpsecurity
"There has been no time to evolve specialized algorithms for things like
chess, programming, riding a bike, etc., and yet we can easily learn these
skills."

Not necessarily. We get a mix of an innate head start and training from
parents and others for a really, really long time before we can do that. The
brain's learning style and speed even change over time, going from rapid,
intuitive learning to slower, more stable learning driven by a combination of
ourselves and others. At some point, our mind is pretty well-formed, with our
own thoughts and explorations driving much of our behavior. And somewhere
between the kid and adult stages we start easily learning new skills.

There's probably a mix of general, specialized, and in-between algorithms. And
it changes over time, too, instead of being a static, pre-trained model.

------
mindgam3
“When the brain builds a model of the world, everything has a location
relative to everything else,” Mr. Hawkins said. “That is how it understands
everything.”

This is an exciting idea to me. I don’t fully understand it, but it resonates
on some intuitive level. Makes me think of spatial audio, echolocation, our
sense of balance, all these forms of intelligence that rely on understanding
space.

Random trivia: I interviewed with Numenta back in 2007. Got an offer, was
excited by the vision, but I didn’t see how they would have any kind of
product within the next 5 years. 11 years later, they’re maybe about to launch
something. Much respect for them pushing through on this wildly ambitious
idea. I’m rooting for them.

~~~
devoply
How is black related to loud?

~~~
tlb
In my mind, black is straight down, while loud is in front of me a little
above eye level, rotated clockwise.

You can map any set of concepts into an N-dimensional space if you have a
similarity metric between them. If some concepts are anchored in space, they
define the overall layout. It could be that loud is related to large, which is
related to tall, which is naturally anchored above eye level.
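The second paragraph can be sketched concretely. In this toy illustration (the concept names, anchor positions, and similarity weights are all invented), a few spatially anchored concepts fix the layout, and an unanchored concept is placed at the similarity-weighted average of the anchors:

```python
# Spatially anchored concepts define the layout; everything else is
# placed relative to them via a similarity metric.

ANCHORS = {            # concept -> (x, y) position in "body space"
    "down":  (0.0, -1.0),
    "up":    (0.0,  1.0),
    "front": (1.0,  0.0),
}

def place(similarities):
    """Position a concept at the similarity-weighted average of anchors."""
    total = sum(similarities.values())
    x = sum(ANCHORS[a][0] * w for a, w in similarities.items()) / total
    y = sum(ANCHORS[a][1] * w for a, w in similarities.items()) / total
    return (x, y)

# "loud" is mostly like "up" and "front", a little like "down":
loud = place({"up": 0.6, "front": 0.3, "down": 0.1})
```

More generally, this is the idea behind multidimensional scaling: any set of concepts with a pairwise similarity metric can be embedded in an N-dimensional space, and fixed anchors pin the layout's orientation.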

------
kev009
I do like this all-encompassing approach and was a fan of his book "On
Intelligence".

In computer hardware and software terms: we don't necessarily need to clone an
x86 CPU, glue and I/O chips, and run an actual copy of firmware, bootloader,
and Windows. If we understood just one of these, how the x86-like CPU works,
and could implement a functional simulator that behaves similarly enough, that
would be extraordinarily useful. ANNs are a primordial stab at this.

But this article is incredibly light on detail. I hope he does something like
a follow up book to explain what he has learned in the time since.

------
andbberger
I never really know what to make of Hawkins' research, but I'll always be
grateful to him for his contributions to starting the Redwood Center at
Berkeley.

------
westurner
Cortical column:
[https://en.wikipedia.org/wiki/Cortical_column](https://en.wikipedia.org/wiki/Cortical_column)

> _In the neocortex 6 layers can be recognized although many regions lack one
> or more layers, fewer layers are present in the archipallium and the
> paleopallium._

What this means for optimal artificial neural network architectures and
parameters, particularly with regard to logic, reasoning, and inference, will
be interesting to learn.

According to "Cliques of Neurons Bound into Cavities Provide a Missing Link
between Structure and Function"
[https://www.frontiersin.org/articles/10.3389/fncom.2017.0004...](https://www.frontiersin.org/articles/10.3389/fncom.2017.00048/full)
, the human brain appears to be [at most] 11-dimensional (11D) in terms of
algebraic topology:
[https://en.wikipedia.org/wiki/Algebraic_topology](https://en.wikipedia.org/wiki/Algebraic_topology)

Relatedly,

"Study shows how memories ripple through the brain"
[https://www.ninds.nih.gov/News-Events/News-and-Press-
Release...](https://www.ninds.nih.gov/News-Events/News-and-Press-
Releases/Press-Releases/Study-shows-how-memories-ripple-through-brain)

> _The [NeuroGrid] team was also surprised to find that the ripples in the
> association neocortex and hippocampus occurred at the same time, suggesting
> the two regions were communicating as the rats slept. Because the
> association neocortex is thought to be a storage location for memories, the
> researchers theorized that this neural dialogue could help the brain retain
> information._

~~~
westurner
Re: Topological graph theory [1], is it possible to embed a graph on a space
filling curve [2] (such as a Hilbert R-tree [3])?

[1]
[https://en.wikipedia.org/wiki/Topological_graph_theory](https://en.wikipedia.org/wiki/Topological_graph_theory)

[2] [https://en.wikipedia.org/wiki/Space-
filling_curve](https://en.wikipedia.org/wiki/Space-filling_curve)

[3] [https://en.wikipedia.org/wiki/Hilbert_R-
tree](https://en.wikipedia.org/wiki/Hilbert_R-tree)

[4] [https://github.com/bup/bup](https://github.com/bup/bup) (git packfiles)
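One standard reduction (a sketch, not a full answer to the question): rather than embedding the graph structure itself on the curve, vertices that already have spatial coordinates are linearized by their Hilbert index, which maps nearby 2D points to nearby 1D positions. That locality preservation is what Hilbert R-trees exploit. Below is the classic xy-to-index conversion for an n×n grid (n a power of two):

```python
# Map (x, y) on an n-by-n grid (n a power of two) to its index d along
# the Hilbert curve. Sorting a graph's vertices by d gives a
# locality-preserving 1D ordering.

def xy2d(n, x, y):
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate the quadrant so the recursion sees a canonical curve.
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Order a toy graph's vertices along the curve:
vertices = [(0, 0), (1, 0), (1, 1), (0, 1)]
order = sorted(vertices, key=lambda v: xy2d(2, *v))
```

Note this only linearizes vertex positions; edges of a general graph cannot all be made contiguous on the curve, which is why such orderings are used as heuristics (e.g., for spatial indexing and cache-friendly layouts) rather than true embeddings.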

------
Confusion

      before the world can build artificial intelligence, it must
      explain human intelligence so it can create machines that
      genuinely work like the brain
    

Why must intelligent machines work like the human brain? It’s just an
accidental result of evolution and doesn’t _define_ what ‘intelligence’ is?

~~~
rvense
We have no definition of intelligence that doesn't refer to human behaviour.

In part this is a curse on ML/AI, since so many of the terms that are used to
define the subject and goals of the field are pre-scientific, even religious
in origin - consciousness and will are two other examples.

------
nihil75
I'm no expert but Jeff Hawkins' theories seem a bit dated to me.

The concept described in this article is the same one in his book "On
Intelligence", which was written long ago, before we knew the impact of a
neural network's arrangement and architecture on its performance.

Findings like the paper describing a task-specific region of the brain that
handles face recognition are not supported by his model.

------
visarga
I don't buy the "one algorithm" idea. I think small changes from region to
region can generate completely different dynamics - for example, different
types of neuron cells, different use of neurotransmitters, or different
responses in time. It might look the same under a microscope, but the
hyperparameters differ between regions. In other words, it is a parametrised
family of algorithms, not just one.

What would interest me more than the one algorithm is finding out the loss
functions that steer our brain into fast learning after birth. We have prior
knowledge about the world encoded in the structure of the brain and in our
loss functions (what the brain optimises for). The brain has many such loss
functions - it can't possibly do all it does with just one, the way neural
networks do - and they are genetically evolved. These priors could be readily
transferred to AI models.

~~~
bra-ket
There are no ‘loss functions’; our brain doesn’t optimize in the machine
learning sense.

~~~
visarga
I think that if it learns, it must have loss functions. We just don't know how
they are implemented. If we did, we could replicate the brain's priors. They
are probably distributed, unlike ML loss functions, which distill everything
to a single number.

~~~
bra-ket
Well, if it learns by remembering sets of features and matching new stimuli
against this "database", then we can talk about distance functions in
associative memory, and the problem of optimization reduces to a problem of
matching (as well as generalizing, i.e. constructing abstract representations
from simpler ones, as in "chunking"). Brain priors would be the distributed
representations of stimulus-response pairs stored in memory.
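The matching view above can be sketched in a few lines (a toy, with invented stimuli and responses): "learning" stores stimulus-response pairs, and "recall" is nearest-neighbor matching under a distance function.

```python
# Associative memory as storage plus nearest-neighbor matching.

def hamming(a, b):
    """Distance between two equal-length binary feature vectors."""
    return sum(x != y for x, y in zip(a, b))

class AssociativeMemory:
    def __init__(self):
        self.pairs = []          # stored (stimulus, response) pairs

    def learn(self, stimulus, response):
        self.pairs.append((stimulus, response))

    def recall(self, stimulus):
        """Return the response paired with the closest stored stimulus."""
        _, response = min(self.pairs,
                          key=lambda p: hamming(p[0], stimulus))
        return response

mem = AssociativeMemory()
mem.learn((1, 1, 0, 0), "grasp")
mem.learn((0, 0, 1, 1), "withdraw")
```

A novel stimulus that differs from a stored one in a single feature still recalls that pair's response, which is the generalization-by-matching behavior described above.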

------
alimw
The idea that much (all?) of human thought, need, fear, etc. is associated
with (identifiable with?) sensations having a definite location in the body is
one that should be familiar to anyone who has practised meditation as taught
to Vipassana students, or basic relaxation as taught to method actors.

And perhaps someone will be able to correct me on this, but didn't Kant treat
spatial awareness as a priori and presumably rather fundamental?

~~~
cosmic_ape
In linguistics there is also work that attempts to understand the formation of
abstractions from sensorimotor concepts. Here's one example:

[http://fas-philosophy.rutgers.edu/goldman/Spring%202008%20Se...](http://fas-
philosophy.rutgers.edu/goldman/Spring%202008%20Seminar/Gallese%20&%20Lakoff%20%28The%20brain%27s%20concepts%29.pdf)

------
WhitneyLand
“You do not have to emulate the entire brain,” he said. “But you do have to
understand how the brain works and emulate the important parts.”

Nonsense.

What percentage of a bird's parts did we need to understand for the first
successful aircraft?

It may be anywhere from very helpful to unnecessary; no one knows yet.

~~~
blhack
I'm not sure I'm following your point. What percentage of parts of a bird were
necessary for the first successful flight? I don't know, less than 1%? Maybe
less than 0.1%?

But you do have to understand [generally] how a bird works to [generally]
emulate its wings [and fly].

~~~
fermienrico
For every biologically inspired invention, there are plenty more that were
invented by understanding principles of engineering and science.

Evolution has given us some incredible stuff, but there could exist simpler,
more elegant solutions. A barn door will give you lift, just not as
efficiently as a NACA airfoil.

Evolutionary mimicry has its place in engineering, no doubt. But arrogantly
stating that we must emulate biological systems is blatantly and demonstrably
false. Otherwise, we wouldn't have nuclear power, superconductivity,
lightbulb, sewing machines and computers.

~~~
greglindahl
Did the original source actually arrogantly state that? I suspect it had some
nuance to it.

