

Jeff Hawkins on the Limitations of Artificial Neural Networks - slacka
http://thinkingmachineblog.net/jeff-hawkins-on-the-limitations-of-artificial-neural-networks/

======
modeless
"I am certain that HTM networks applied to applications like vision will not
exhibit the problems talked about in this paper. Brains won’t either."

Put up or shut up, I say. If you have a model that works better, demonstrate
it. ANNs simply crush every competitor in real-world performance.

Modeling brains is a cool idea but it doesn't have any practical advantages
unless it improves learning performance or has medical applications. Neither
is true of any ANN competitor so far.

Honestly, I think we're much more likely to eventually understand the brain by
first focusing on building working AI systems of our own design and then
comparing them to the brain (with the help of our AIs!), rather than reverse
engineering the brain first and building systems that mimic it.

~~~
sgk284
He has "put up".

He's been doing and funding neuroscience research for over a decade and has
built some impressive predictive systems. Leading researchers, such as Andrew
Ng (the Director of the Stanford Artificial Intelligence Lab), have stated
that Jeff's publications have directly influenced their thinking.

That said, I agree machine intelligence will initially be the equivalent of
building machines that fly without flapping wings, but Jeff's approaches
don't forbid this (he often deviates from nature in his models, though he
tries to stay close).

~~~
modeless
> has built some impressive predictive systems

Do you have any links to examples? I'm not aware of any systems built with
HTMs that I'd consider "impressive" relative to what ANNs are doing these
days, e.g. learning to play Atari games by watching the screen [1],
approaching human performance on object classification in unrestricted images
[2], or learning to execute simple Python programs by reading the source code
[3].

[1] https://medium.com/the-physics-arxiv-blog/the-last-ai-breakthrough-deepmind-made-before-google-bought-it-for-400m-7952031ee5e1

[2] http://karpathy.github.io/2014/09/02/what-i-learned-from-competing-against-a-convnet-on-imagenet/

[3] http://arxiv.org/abs/1410.4615

~~~
sgk284
Oh, don't get me wrong, the things being done with ANNs are incredibly
impressive. Far more so than anything I've seen with HTMs to date. I was just
pointing out that Jeff _is_ putting his money where his mouth is. Also, much
of the resurgence of interest in AI began about 12 years ago in large part
due to Jeff's thinking and writing (which mostly consolidated existing ideas
into a cohesive theory), as Andrew Ng and others have stated [1]. Jeff helped
kick AI out of a plateau and avoid another AI winter.

For interesting things that Numenta has done, just check their website [2].
Their most popular product is likely Grok [3], which predicts things like
server loads. It's nothing that ANNs couldn't do, and ultimately it isn't
that impressive at face value, but the implementation is interesting and has
far-reaching implications (with regard to things like energy efficiency,
among other areas).
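
To make the task concrete, here is a minimal sketch of the kind of streaming
anomaly detection "predicting server loads" involves, using a plain rolling
z-score. This is purely illustrative and is not Numenta's algorithm (Grok is
HTM-based); the window size and threshold are arbitrary assumptions.

```python
# Hypothetical sketch of streaming anomaly detection on a server metric.
# NOT Numenta's method; just a baseline illustrating the task Grok targets.
from collections import deque
import math
import random

class StreamingAnomalyDetector:
    """Flag points that deviate sharply from the recent history."""

    def __init__(self, window=100, threshold=4.0):
        self.history = deque(maxlen=window)  # rolling window of recent values
        self.threshold = threshold           # z-score cutoff (assumed value)

    def observe(self, value):
        if len(self.history) >= 10:  # need some history before scoring
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = math.sqrt(var) or 1e-9  # avoid division by zero
            is_anomaly = abs(value - mean) / std > self.threshold
        else:
            is_anomaly = False
        self.history.append(value)
        return is_anomaly

detector = StreamingAnomalyDetector()
for t in range(1000):
    load = 50 + 10 * math.sin(t / 24) + random.gauss(0, 1)  # fake daily cycle
    if t == 700:
        load += 40  # simulated spike the detector should catch
    if detector.observe(load):
        print(f"anomaly at t={t}: load={load:.1f}")
```

Grok's actual approach replaces those rolling statistics with learned
temporal predictions, which is where the interesting implementation
differences come in.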

I think the future holds incredible things for ANNs and we'll continue to see
great breakthroughs with them, but HTMs hold a lot of promise too. ANNs have
been around quite a bit longer and have orders of magnitude more researchers
working with them, so I'd argue it's premature to dismiss HTMs just because a
handful of researchers in Redwood City, CA haven't solved every AI problem within
12 years.

It was only 15-20 years ago that ANNs were considered to have peaked and been
surpassed by Bayesian networks and support vector machines. How times have
changed since then!

[1] http://www.forbes.com/sites/roberthof/2014/08/28/interview-inside-google-brain-founder-andrew-ngs-plans-to-transform-baidu/

[2] http://numenta.com/

[3] http://numenta.com/grok/

------
antimora
I've noticed that Jeff Hawkins's claims quite frequently lack detail and
scientific backing. I read his book "On Intelligence" and have followed his
talks, and I've been inspired by them because his explanations are highly
intuitive, but when it comes to demonstrating the awesomeness of his
approach, the results are less tangible.

------
ckaygusu
I chose deep learning as the topic of my graduation thesis, and lately I have
had the opportunity to study the evolution of artificial neural networks. I
think deep learning is entering its descending period on the hype curve, and
in an interesting way: the criticism is being delivered by the prominent
figures of the field themselves. If my memory serves me well, this is the
third article brought to HN within the last month on the topic of "deep
learning does not represent how the real brain works and is actually
unsuitable for artificial general intelligence".

In one of his seminars, Andrew Ng talks about "the algorithm": he pointed out
that the human brain may rewire itself to handle different tasks; for
example, a part of the brain responsible for sight might take up the task of
hearing. The idea of ANNs was undisputedly inspired by research investigating
how the brain actually works, and it has now set the state of the art in many
areas where it has been applied, but to do this we had to resort to "ugly
hacks" from the perspective of "the algorithm", to name two of them:
autoencoders and restricted Boltzmann machines. It works, yes, it works very
well, but we paid the price of taking a detour from finding "the algorithm".
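
For readers who haven't met one of those "hacks": below is a minimal sketch,
in plain numpy, of training a restricted Boltzmann machine with one step of
contrastive divergence (CD-1). All sizes, data, and hyperparameters are toy
values made up for illustration.

```python
# Toy RBM trained with CD-1. Illustrative only; real uses stacked these
# into deep belief nets with careful schedules.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 64, 32, 0.1  # assumed toy sizes
W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

def cd1_step(v0):
    """One contrastive-divergence (CD-1) update on a batch of binary rows."""
    global W, b_v, b_h
    # Positive phase: hidden activations driven by the data.
    h0 = sigmoid(v0 @ W + b_h)
    h0_sample = (rng.random(h0.shape) < h0).astype(float)
    # Negative phase: reconstruct the visibles, then re-drive the hiddens.
    v1 = sigmoid(h0_sample @ W.T + b_v)
    h1 = sigmoid(v1 @ W + b_h)
    # Nudge parameters toward data statistics, away from model statistics.
    n = len(v0)
    W += lr * (v0.T @ h0 - v1.T @ h1) / n
    b_v += lr * (v0 - v1).mean(axis=0)
    b_h += lr * (h0 - h1).mean(axis=0)

# Toy data: random sparse binary patterns standing in for real inputs.
data = (rng.random((500, n_visible)) < 0.3).astype(float)
for epoch in range(10):
    for i in range(0, len(data), 50):
        cd1_step(data[i:i + 50])
```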

In my opinion, remarks like those in the OP should not be taken as an
aggressive stance against deep learning. Just because it is not "the" method
doesn't mean it has no value whatsoever, but there is a point here: once the
limitations of deep learning are well explored, we will need a new idea to
push the field further.

~~~
CompleteSkeptic
State-of-the-art results no longer require unsupervised pretraining with
autoencoders or RBMs. But back when unsupervised pretraining was more
popular, top researchers rationalized that it was more biologically plausible
than standard nets trained with backprop: brains generalize by observing a
large amount of data over their lifetimes in order to quickly recognize new
objects, and since the pretrained nets aren't trained for a specific task,
the hope was that they would generalize better and be a step closer to
general intelligence.
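
As a rough sketch of that recipe (under made-up toy assumptions: random data,
one layer, arbitrary sizes): the unsupervised phase trains a layer to
reconstruct its input, and the supervised phase would then start from those
weights instead of a random initialization.

```python
# Toy single-layer autoencoder pretraining in numpy. Illustrative only;
# real setups stacked several layers and fine-tuned with labels afterward.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hid, lr = 100, 30, 0.05  # assumed toy sizes
W_enc = rng.normal(0.0, 0.1, size=(n_in, n_hid))
W_dec = rng.normal(0.0, 0.1, size=(n_hid, n_in))

X = rng.random((1000, n_in))  # stand-in for unlabeled data

# Unsupervised phase: learn to reconstruct the input (squared error).
for epoch in range(20):
    H = sigmoid(X @ W_enc)   # encode
    X_hat = H @ W_dec        # decode (linear output layer)
    err = X_hat - X
    grad_dec = H.T @ err / len(X)
    dH = (err @ W_dec.T) * H * (1.0 - H)  # backprop through the sigmoid
    grad_enc = X.T @ dH / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# A supervised net would now initialize its first layer from W_enc; the
# hope was that features learned without labels transfer and generalize.
first_layer_init = W_enc.copy()
```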

------
rohunati
I met with the head of the Data Science Institute at NYU a few years ago (not
LeCun). I don't remember the specifics of our conversation, but at one point
I mentioned Jeff Hawkins (I had just read his book "On Intelligence"), and he
responded with some comments that were politely dismissive of Hawkins.

Some of the other comments here seem to share the same sentiment. I really
loved his book, but I haven't really followed his work for the past few years.
Anyone know what he's been up to in ML/AI lately?

~~~
oxtopus
NuPIC, the open-source project upon which Grok (Numenta's commercial product)
is built, has an active community at http://numenta.org/lists/. We had a
workshop and hackathon a few weeks ago; more details can be found at
http://numenta.com/learn/ and
http://numenta.org/blog/2014/10/30/2014-fall-hackathon-outcome.html,
respectively. See also https://github.com/numenta/nupic.research for the
latest research efforts, https://github.com/numenta/NAB for forthcoming
benchmarks, and of course http://numenta.org/ for overall details about
NuPIC and http://numenta.com/ for Numenta, the company.

------
jxcole
I am by no means an AI professional or a neurobiologist, but I have read
recently that a major problem with neural networks is that they do not have
glial cells, which, among other things, move synapses around and help with
synapse growth.

There are around 100 billion neurons in the brain and 100 trillion synapses.
This indicates to me that synapses may be more important to biological neural
networks than neurons are, and yet most ANN research has focused exclusively
on neurons. (I once asked my teacher which neurons we should connect to each
other; using a GA was the recommendation, which is not a bad idea, but it
indicates to me that we have a very poor understanding of how synapses ought
to work if we want to make better ANNs.)
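
For what it's worth, here is a toy sketch of that GA suggestion: evolve a
binary mask over the possible connections. The fitness function below is a
deliberately crude stand-in (accuracy of a fixed random-weight network under
the mask); a real system would train and validate each candidate topology.

```python
# Toy genetic algorithm over connectivity masks. Everything here is an
# assumed illustration: sizes, data, mutation rate, and the fitness proxy.
import numpy as np

rng = np.random.default_rng(2)

n_in, n_hid = 8, 16
X = rng.normal(size=(200, n_in))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy target
W1 = rng.normal(size=(n_in, n_hid))        # fixed random weights
w2 = rng.normal(size=n_hid)

def fitness(mask):
    """Score a connectivity mask by the accuracy of the masked network."""
    H = np.tanh(X @ (W1 * mask.reshape(n_in, n_hid)))
    pred = (H @ w2 > 0).astype(float)
    return (pred == y).mean()

pop = rng.random((30, n_in * n_hid)) < 0.5  # 30 random binary masks
for gen in range(50):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]  # keep the 10 fittest
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, a.size)         # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(child.size) < 0.01  # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents, children])

print("best accuracy:", max(fitness(m) for m in pop))
```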

I disagree with the overly pessimistic view of this article, however. To say
that our standard neural networks are not nearly as complex as biological
ones would be accurate, but this hardly means that ANNs are doomed forever;
we just need more research into the things that matter.

------
Teodolfo
Hawkins has no credibility in machine learning, so I don't think people
should really care about what he says. To the extent that anything he _does_
say about the field makes sense, it isn't novel. Unfortunately, our society
reveres the super-wealthy and treats them as experts on everything.

~~~
sillyme33
>Hawkins has no credibility in machine learning, so I don't think people
should really care about what he says.

Says the most respected name in the field of machine learning, Teodolfo.

Srsly. Why the hate, Teo? He merely said exactly what everyone from Michael
Jordan to Andrew Ng has said: there's nothing very 'neural' about ANNs.

ANNs work. No doubt. But they are limited in what they can do. Jeff is merely
pitching a new approach that is intended to overcome some of those limitations.

But since you don't care what he has to say, I doubt you have a very firm
understanding of what he has said.

