
A Framework for Intelligence and Cortical Function Based on Grid Cells - doener
https://www.biorxiv.org/content/early/2018/10/13/442418
======
jamesblonde
Grid cells are, according to a learned colleague of mine, an epiphenomenon.
If so, this is not a viable path, as it is really just the grandmother cell
theory in disguise.

I like Hawkins's work. I think he has good instincts, and he was probably the
first to hammer home the point that all the brain does is make predictions and
that there are a small number of evolutionary mechanisms at work in the brain.
Still, that doesn't make him right here.

~~~
clickok
I am not sure that he's the first to note that the brain is a predictive
machine, or to notice that models of state or the environment can be
formulated entirely in terms of predictions[0].

Personally, I find that idea very convincing, because we already know how to
learn to predict (reinforcement learning), and we can use the same framework
to _optimize_ an agent's behavior to maximize against these predictions. There
is some evidence that the brain uses something like temporal difference
methods to learn.
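
Since temporal difference learning comes up a lot in this context, here is a
minimal TD(0) sketch on a toy 5-state chain. The environment and all numbers
are made up for illustration; this is just the update rule, nothing from the
paper:

```python
# TD(0) value learning on a deterministic 5-state chain.
# State 4 is terminal and pays reward 1 on entry; every other step pays 0.
values = [0.0] * 5               # value estimate per state
alpha, gamma = 0.1, 0.9          # learning rate and discount factor

for episode in range(1000):
    s = 0
    while s != 4:
        s_next = s + 1           # deterministic walk to the right
        reward = 1.0 if s_next == 4 else 0.0
        # TD error: bootstrapped target minus the current estimate
        values[s] += alpha * (reward + gamma * values[s_next] - values[s])
        s = s_next
```

After enough episodes the estimates settle near the gamma-discounted distance
to the goal: roughly 0.729, 0.81, 0.9, 1.0 for states 0 through 3.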

The hard parts are: how to do this in a stable and efficient manner, and how
to build up our representation to go from relatively simple goals (e.g.,
survive and collect resources) to more abstract goals (e.g., translate a poem
in such a way that the original's charm is preserved[1]). That is, you want to
have a system that is capable of climbing up Maslow's hierarchy (or something
like it) and actually proceeds to do so based on emergent properties of the
system itself.

Proposing and investigating new ways of constructing learning systems so that
there is a continuum between primal drives and complex abstract goals is hard,
and even if grid cells are an epiphenomenon, it is still worth studying them
to see how they might develop from simpler rules. If Hawkins' team can make
something useful, then that can still be beneficial by allowing for more
powerful and flexible AI, even if it's not a theory of everything.

---

0. See for example:
[http://web.eecs.umich.edu/~baveja/Papers/psr.pdf](http://web.eecs.umich.edu/~baveja/Papers/psr.pdf)
(Littman, Sutton, and Singh), which describes and analyzes the equivalence
between belief-state based representations and state representation based on
predictions.
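
To give a flavor of that equivalence: the belief state of a small HMM and a
vector of predictions about future observations carry the same information,
and the map between them is linear. A toy sketch with a made-up 2-state HMM
(all numbers illustrative, nothing here is from the linked paper):

```python
import numpy as np

T = np.array([[0.9, 0.1],        # P(next state | state)
              [0.2, 0.8]])
O = np.array([[0.7, 0.3],        # P(observation | state), observations {0, 1}
              [0.4, 0.6]])

def predict(b, test):
    """P(the next observations are exactly `test` | belief b)."""
    v = b.copy()
    for obs in test:
        v = (v * O[:, obs]) @ T  # condition on obs, then propagate one step
    return v.sum()               # trailing @ T is harmless: rows of T sum to 1

def belief_from_prediction(p0):
    """Invert the one-step test for this O: p0 = 0.7*b0 + 0.4*(1 - b0)."""
    b0 = (p0 - 0.4) / 0.3
    return np.array([b0, 1.0 - b0])

b = np.array([0.35, 0.65])
p0 = predict(b, [0])             # prediction of the single "core test"
b_rec = belief_from_prediction(p0)

# Any longer prediction agrees whether computed from the belief or from
# the state recovered out of the core-test prediction.
assert np.allclose(predict(b, [0, 1, 1]), predict(b_rec, [0, 1, 1]))
```

Here one core test suffices because the belief lives on a line; in general you
need as many linearly independent tests as the system has underlying degrees
of freedom.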

~~~
jamesblonde
You have a point that the grandmother cell theory appeared first. Got people
thinking. Then the grandmother cell theory people themselves came out and said
'hey, we were wrong - we think it is sparse coding'

[https://en.wikipedia.org/wiki/Alternative_explanations_of_th...](https://en.wikipedia.org/wiki/Alternative_explanations_of_the_%22grandmother%22_cell)

Something similar could happen here. But then, it won't be grid cells as they
are. Will it be that location is coded in every sparse representation? I don't
know...

------
lucidrains
Numenta also put out a video recently explaining this new theory
[https://www.youtube.com/watch?v=LNRZD9YJCdI](https://www.youtube.com/watch?v=LNRZD9YJCdI)

------
jamiek88
Ah the new paper from Hawkins et al.

Thanks, OP, this will be Sunday morning reading at its finest.

The ideas from his 2007 book have stuck with me and it will be very
interesting to see where the state of their art is.

~~~
giardini
What "ideas from his 2007 book" are those? I read the book and my conclusion
was that there was nothing new in it. (Sorry if this bursts someone's bubble)

~~~
neuronexmachina
I don't think there was much new in it, but (intentionally or not) it was a
pretty good layman's introduction to the preceding decade or so of thought
about predictive behavior in neural systems. Of course, Hawkins tended to
imply that it was all his own ideas, but if you ignore that it's a pretty good
overview.

------
p1esk
Related:
[https://news.ycombinator.com/item?id=18214707](https://news.ycombinator.com/item?id=18214707)

------
drjeezy
Despite being mentioned everywhere, the term "object" is never formally
defined.

What does the brain even consider an "object"? The paper is fundamentally
flawed by missing this definition, since its central idea relies entirely on
the definition of an object.

Are there some common things in the physical world that constitute the basic
atomic building blocks of objects, according to the brain? If so, this needs
to be specified; otherwise all of this theory is totally vague...

How is the statement that "the cortex is capturing the relative positions of
objects" novel in the slightest? Obviously mammals are capable of navigation,
hunting, etc. Basically any interaction with your environment requires
immediately knowing where all objects are, so obviously this is critically
encoded within our learned representations of the world.

~~~
manuka
An object is a set of features at locations, according to this video:
[https://www.youtube.com/watch?v=zVGQeFFjhEk](https://www.youtube.com/watch?v=zVGQeFFjhEk)

------
ASalazarMX
This is way beyond what I can understand, but it got me thinking: if a
complex neocortex finally gets emulated and trained (with current ML
techniques) to a human level, how hard do you think it would be to implant
primary directives?

Robot brains in Asimov's SF stories were designed mathematically, and they
were mathematically proved incapable of defying the three laws. Our robots
would be less certain, as their design would include too much randomness to
manipulate with absolute certainty. OTOH, even Asimovian robots could defy the
three laws if they knew enough philosophy.

~~~
monocasa
IMO, any intelligence smart enough to interpret the three laws is smart
enough to interpret them in a way you didn't expect.

The three laws are an interesting thought experiment, but not an interesting
solution.

------
sova
So the memory palace is literally how functions are organized across the
neocortex, neat. Well done, ancient Greece

------
rayalez
I highly recommend checking out "On Intelligence" by Jeff Hawkins [1] - an
amazing book where the author of this paper elaborates on his theory (as of
2004).

Also, I'm currently reading "I Am a Strange Loop" by Douglas Hofstadter [2],
it's relevant to this topic and super fascinating. I bet you'll enjoy reading
it!

[1] [https://www.audible.com/pd/On-Intelligence-Audiobook/B002V8L...](https://www.audible.com/pd/On-Intelligence-Audiobook/B002V8LKTE)

[2] [https://www.audible.com/pd/I-Am-a-Strange-Loop-Audiobook/B07...](https://www.audible.com/pd/I-Am-a-Strange-Loop-Audiobook/B07HJ9NHHM)

~~~
zanek
Thanks for linking to “I am a strange loop”, I’m a big fan of Doug H

I’ve been waiting for it to be released on Audible for years. They just added
it

------
Rainymood
I just skimmed the paper and there are literally 0 (zero) equations in there.
Is this normal for these kinds of papers?

~~~
willeh
I'd say it is normal. Generally you wouldn't find equations describing some
proposed model; instead you would find "equations" describing the statistical
observations. This paper, however, isn't empirical. It instead proposes a
theoretical framework, based on the earlier studies. It argues (hopefully
soundly) from the premises of the quoted studies to the conclusions it
reaches. The point of the research isn't to present exactly how this behaves
in mathematical terms, or even to suggest a mathematical model for how it
could work. Instead it presents a semi-formal description of how it could
work, so someone can come up with an experiment trying to establish if it
really does work that way. Once some evidence is obtained, you go from there
to the more mathematical models.

It's hard to formalise things, especially in fields where you're far removed
from the maths. So rather than spending a ton of time making a model that is
probably fundamentally wrong, you make an approximation in informal language.
Scientific discovery is, in my mind, always a balancing act between verifying
propositions (which don't really _mean_ anything) and sharing thoughts (which
don't really _prove_ anything).

In cognitive neuroscience semi-formal descriptions are great. In comp-sci,
give me the equation already!

