

Jeff Hawkins talk on modeling neocortex and its impact on machine intelligence - psawaya
http://www.numenta.com/htm-overview/education/jeff-2010-smithgroup-lecture.php

======
TheEzEzz
I watched the whole thing and like the philosophy of the approach.
Unfortunately there isn't a single demonstration of the software, nor a direct
comparison between the software and other machine learning algorithms.

At one point Hawkins even says he's not going to waste time by talking about
specific data sets they've applied the software to because there are too many!
Then pick one! Pick the best!

~~~
hadronzoo
Vitamin D's software is built atop Numenta's platform:
<http://vitamindinc.com/demo.php>

~~~
ogrisel
AFAIK this is not based on the new architecture introduced in this talk.

------
nswanberg
Hawkins has also given some great talks on starting a company. He has an
interesting take on entrepreneurship in that he views it as a tool of last
resort to be used when you can't get something made within an existing
organization:

<http://ecorner.stanford.edu/author/jeff_hawkins>

------
weeksie
This was really fun to watch. I read On Intelligence by Jeff Hawkins several
years ago and wondered when or if Numenta was ever going to get off the
ground. It looks like they're getting close to their goal. Yeah, there were no
real demonstrations, but this looks a lot more promising than previous
AI/Neural Network stuff.

------
brendano
Have they compared their algorithm against standard well-known algorithms for
the tasks they claim to solve? Last time I checked they still hadn't done
that, or at least hadn't reported any such results. Without that, they're not
worth anyone's time.

------
evolvingstuff
Very interesting video; these concepts seem to have progressed a lot since
last I checked in on them.

That being said, I'm finding it very difficult to find any objective
comparisons of these algorithms to other, more mainstream machine learning
techniques. In the talk, he gave the impression that there was a tremendous
amount of data to back up these claims, and that he just didn't have time to
present it all. I went through many of the white papers available on the
Numenta website. Many were just overall outlines of the approach. A few of
them demonstrate tasks for which some form of learning is occurring; however,
it was hard for me to know, in the absence of objective comparisons to other
techniques, just how good the results really are.

So far, the only objective comparison I could find involved handwritten
character recognition, and that was against what appeared to be a standard
feed-forward neural network with only a single hidden layer. Not exactly state
of the art. Why not compare to SVMs, convolutional NNs, deep belief nets,
etc.?

So I am at this point hopeful, but fairly skeptical. If nothing else these are
some inspiring ideas.

------
cgs
Coincidentally, I just finished On Intelligence, and I found it pretty mind-
blowing. If you're interested in learning and the brain, read it! You can also
read about the HTM algorithm they are working on here:
<http://www.numenta.com/htm-overview/education.php> According to the first
paper, there's enough detail there for you to implement the algorithm
yourself. Cool!

~~~
possibilistic
Don't be fooled by approaches like this. HTM is an oversimplification that
doesn't bring us any closer to real machine intelligence. I read the book a
few years back when I was taking courses in machine learning and
metaheuristics, and I recall being impressed. After picking up a molecular
biology background, however, I've become skeptical of any claims to model
"algorithms" after the brain or neuron or neocortex. Regardless of the level
of abstraction chosen by the investigator, it isn't enough.

To put it simply, I strongly feel that achieving any kind of biologically-
inspired intelligent agent will require a systems biologic approach where we
model every minute molecular detail in silico. This isn't an undertaking that
we even have the technology for at present. We don't have the raw speed, level
of parallelism, or even the molecular/cell physiologic details necessary to
model even parts of the brain. (Even of Drosophila!)

The Blue Brain project is nice and is worth following, if for nothing else,
to learn best engineering practices for developing the architecture behind
something of this scale and complexity--but every simplification we make
introduces error. (Imagine patching it! Imagine the "oops" moment, when some
molecular mechanism doesn't work as we expected--and that's a regularly
occurring event.) I'm not even sure how much simplification we can make before
the emergent properties of the brain no longer function. Some of my colleagues
say membrane potentials and the cytoskeletal system have key quantum
interactions that encode state information--something we don't even understand
yet. (I can't comment much on that, since I haven't studied quantum physics.)

I'm actually learning to develop algorithms that will focus on the interplay
of the genomic machinery (promoters/enhancers, tx, translation,
chaperones/folding, etc), biochemical pathways and kinetics, concentration
levels, receptors, etc. in the hopes that one day we will be able to model
systems like the brain. But from my limited knowledge, a project on the brain
scale will only succeed after we solve the "much less complicated" problems:
cancer, Alzheimer's, and aging, all of which are cell-level problems.
That's where we have to focus at present--and you can see how much more
remains to be done.

~~~
StavrosK
I completely agree with you. I asked one of my professors at my Machine
Learning masters course about this, and he said "if I've never heard of it,
how good can it be?"

Algorithms that replicate biological processes are popular because people can
easily grasp them. Anyone can understand why evolutionary systems work, or why
neural networks work (because it's in nature, dummy!), but they don't work as
well as other, purely mathematical methods.

In the end, these sorts of things tend to be toys or marginally useful, where
other, more mathematically sound algorithms dominate the landscape. I got very
excited about HTMs too, when I didn't know as much about ML as I do now, but
I've realised that it hasn't made even a dent in academic circles (you know,
the ones with the thousands of people who study these things for a living).

~~~
Dn_Ab
I also agree, but while neural networks are a sort of black box, they are also
entirely mathematical. Some of the most impressive current research is in the
area of deep learners, examples of which are types of neural network: RBMs and
Deep Belief Nets. In particular, Deep Belief Nets have the added advantage of
being able to display their internal abstractions and state, and so are not so
boxed up.

~~~
StavrosK
I will concede that, but they're a very roundabout way of building a
classifier. An SVM, for example, is really much simpler and works much, much
better than a neural network. I'm not familiar with RBMs and DBNs, so I can't
comment on that, sadly...

~~~
ckcheng
SVMs may make a lot of intuitive sense due to the intuitive nature of the max-
margin method of building a classifier, but that doesn't make them "much, much
better" than a neural network.

Neural networks are "universal" function approximators in the sense that,
given a sufficient number of hidden layer units and training data, they can
approximate any Borel measurable function. This makes them theoretically
powerful. (There are issues with this "universalness" claim though when doing
cognitive modelling. See "Cognition and the computational power of
connectionist networks", Hadley at
<http://www.informaworld.com/smpp/content~db=all?content=10.1080/09540090050129745>)

In practice, the best handwritten digits classifier is still neural networks.
LeCun keeps score at <http://yann.lecun.com/exdb/mnist/>.

Are there challenges with training neural networks? Certainly. But for many
applications, there are good reasons to use a neural network. For instance,
inference in a neural network is fast, and the learnt model is small in size
(in feedforward networks, just a few large matrices).
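
As an aside, here's a minimal numpy sketch (illustrative only, with made-up
toy dimensions, not any particular library's code) of why feedforward
inference is fast and the model compact: the entire learnt "model" is a
handful of weight matrices, and prediction is a couple of matrix multiplies
plus nonlinearities.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 2-layer feedforward net: 4 inputs -> 8 hidden units -> 3 outputs.
# The whole "model" is just these four arrays.
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
W2, b2 = rng.standard_normal((3, 8)), np.zeros(3)

def predict(x):
    h = np.tanh(W1 @ x + b1)   # hidden layer activation
    return W2 @ h + b2         # output layer (raw scores)

x = rng.standard_normal(4)
y = predict(x)                 # inference: two matrix multiplies
```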

SVMs, on the other hand, are "easier" to train, but care has to be taken to
use the "right" kernel, and in many cases the learnt model has to carry
around up to half or more of the training data set (i.e., the support
vectors), making the model bigger in size and inference slower (although
there is work on producing a sparser set of support vectors; see "An
Effective Method of Pruning Support Vector Machine Classifiers", Liang, at
<http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5342443>).
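
To make the support-vector point concrete, a small sketch using scikit-learn
(my choice of library, and a toy problem I made up): a fitted RBF-kernel SVM
stores a subset of the training points as support vectors, and prediction
evaluates the kernel against every one of them.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# A toy nonlinearly separable problem: label points by quadrant parity.
X = rng.standard_normal((200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

clf = SVC(kernel="rbf").fit(X, y)

# The model carries the support vectors themselves around for inference.
n_sv = clf.support_vectors_.shape[0]
print(f"{n_sv} of {len(X)} training points retained as support vectors")
```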

I know neural network research is kinda old school these days, has been
overhyped in the past, and SVM research is kinda "in" nowadays, but that's no
reason to think SVM is "much, much better" than ANN.

~~~
StavrosK
Hmm, I didn't know that about handwritten digits, thanks. I agree that an SVM
has to carry around its support vectors, but I've still gotten better results
with an SVM. That said, it might just have been a suitable problem.

------
sleight42
Or a YouTube video of a presentation from earlier this year:
<http://www.youtube.com/watch?v=TDzr0_fbnVk#>.

Flash video is sadly unfriendly to my iPad (where I am now).

~~~
SiVal
So, "Flash video is sadly unfriendly" to your welcoming little iPad? And the
Mac needed its own proprietary browser, because others were so unfriendly.
Talk about a reality distortion field....

------
icandoitbetter
Jeff Hawkins began a few decades ago by criticizing all those neural-net
researchers who were all promises and no results, and (surprise!) he's become
one of them now.

~~~
sleight42
Apparently false. See the above comment:
<http://news.ycombinator.com/item?id=1945885>

~~~
icandoitbetter
Oh, come on. By the same logic, neural nets were applied in a thousand and
one domains. The problem of object tracking has been solved a thousand times
in the past. Even if Numenta's algorithm is better at it than every other
solution, you still can't call this a _major result_!

Even Hawkins himself has admitted that his hierarchical network design hasn't
caused the huge bang he expected.

