Numenta Platform for Intelligent Computing (github.com/numenta)
82 points by martinlaz 21 days ago | 23 comments



For people wanting to look into HTM (Hierarchical Temporal Memory), do check out Numenta's main website [1], in particular the papers [2] and videos [3] sections.

Otherwise, HTM inventor Jeff Hawkins' book "On Intelligence" [4] is one of the top 3 or so most fascinating books I've ever read. It doesn't cover HTM itself, just how the brain works at a conceptual level, but in a way I haven't seen anyone else explain. Jeff clearly has an ability to see the forest for the trees in a way that is not too commonly found. This is one of the reasons I think HTM might be on to something, although it of course has to prove itself in real life too.

But we should remember how long classic Neural Networks were NOT overly successful, and almost dismissed by a lot of people (including my university teacher, who was rather skeptical about them when I took an ML course some 12 years ago; I personally believed in them a lot). We had to "wait" for years and years until enough people had thrown enough work at figuring out how to make them really shine.

[1] https://numenta.org/

[2] https://numenta.com/neuroscience-research/research-publicati...

[3] https://www.youtube.com/user/OfficialNumenta

[4] https://www.amazon.com/Intelligence-Understanding-Creation-I...

Edit: Fixed book link.


> we should remember how long classic Neural Networks were NOT overly successful

We already knew in the late '80s/early '90s that neural networks were universal function approximators, and there was an era in those same years when neural nets were VERY successful (or at least: very influential among the ML circles of their day). Sure, they were dismissed once kernel machines came about, simply because those had more to offer at the time. But it would be a mistake to compare HTM with classical neural networks: neural nets were always known to do something sensible and to "work", even if they might not be the state-of-the-art method.
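
To make the "universal function approximator" claim concrete: one hidden layer with a squashing nonlinearity can fit arbitrary continuous functions (Cybenko 1989; Hornik et al. 1989). Below is a minimal, purely illustrative sketch in plain NumPy that fits sin(x) with a single tanh layer by gradient descent; every name and number in it is made up for the example:

    import numpy as np

    # One hidden tanh layer trained by plain gradient descent to fit sin(x).
    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
    y = np.sin(x)

    hidden = 32
    W1 = rng.normal(0, 1.0, (1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, 1)); b2 = np.zeros(1)

    lr = 0.05
    for step in range(5000):
        h = np.tanh(x @ W1 + b1)           # hidden activations
        pred = h @ W2 + b2                 # network output
        err = pred - y
        # Backpropagate through both layers.
        dW2 = h.T @ err / len(x); db2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
        dW1 = x.T @ dh / len(x); db1 = dh.mean(axis=0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

    print("final MSE:", float((err ** 2).mean()))  # should be small after training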

In stark contrast, HTM has been "out there" for over a decade by now, with (as far as I know) not a single tangible result, neither theoretical nor practical. They never managed to cobble together even a single paper with credible results, even though they came out with it right at the time when connectionist approaches became popular again (yes, there were papers, but there's a reason they only got published in 2nd or 3rd tier venues). From where I stand, it's a "hot air" technology that somehow seems to stay afloat because the person behind it knows how to write popular science books. Every researcher I know who tried to make HTM work came away with the same conclusion: it just doesn't.


"Everyone researcher I know who tried to make HTM work came away with the same conclusion"

Is there anything you can share on this? I'd like to read more about how and why researchers came to that conclusion. Thanks.


I was a graduate researcher implementing HTM in a hardware accelerator. The biggest problem was that there was never any sort of specification for HTM beyond a white paper that only vaguely described its internal structures. And the picture the white paper painted was of a design with N^N different hyperparameters.

Oh, and then the whole thing output a jumble of meaningless bits that had to be classified, an algorithm Numenta kept hidden away as the secret sauce... but if you have to use a NN classifier to understand the results of your HTM... Too many red flags of snake oil. And I really wanted it to work. It doesn't help that Jeff Hawkins has largely abandoned HTM for new cognitive algorithm pursuits.
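
For readers who never tried it, the NuPIC-era pipeline was roughly encoder -> spatial pooler -> temporal memory -> separate classifier. The toy sketch below is schematic only: the encoder is a crude stand-in for NuPIC's scalar encoder, the "HTM" step is replaced by a fixed random permutation, and a nearest-neighbor overlap lookup stands in for the classifier. It exists purely to show where the "jumble of meaningless bits" sits and why a separate trained decoder is needed to get values back out:

    import numpy as np

    rng = np.random.default_rng(0)

    def scalar_encode(value, vmin=0.0, vmax=100.0, size=400, active=21):
        # Toy scalar encoder: a contiguous block of `active` ON bits whose
        # position tracks the value (a crude imitation of a real encoder).
        sdr = np.zeros(size, dtype=np.uint8)
        start = int((value - vmin) / (vmax - vmin) * (size - active))
        sdr[start:start + active] = 1
        return sdr

    # Stand-in for the spatial pooler + temporal memory: a fixed random
    # permutation to another sparse bit vector. The point is only that the
    # output bits carry no human-readable meaning on their own.
    PERM = rng.permutation(400)

    def fake_htm_step(sdr):
        return sdr[PERM]

    # The separate decoder the parent comment complains about: you need a
    # *trained* model to map output SDRs back to values. A 1-nearest-neighbor
    # lookup by bit overlap stands in for it here.
    train = [(v, fake_htm_step(scalar_encode(v))) for v in range(0, 101, 5)]

    def classify(out_sdr):
        return max(train, key=lambda pair: int(np.dot(pair[1], out_sdr)))[0]

    print(classify(fake_htm_step(scalar_encode(42.0))))  # -> 40, nearest trained value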


Hawkins also has a new book coming. His first book (as mentioned in other comments) is a fantastic read.

https://www.amazon.com/Thousand-Brains-New-Theory-Intelligen...


For those interested, the book probably expands on the following paper from a few months ago: https://numenta.com/blog/2019/01/16/the-thousand-brains-theo... (the paper is linked from the blog post)

There's also a recent Lex Fridman podcast episode where he interviews Jeff Hawkins on this theme: https://lexfridman.com/jeff-hawkins/


Thanks for posting this, I've just pre-ordered it. I didn't know he was coming out with a new one until I saw your post. I really liked reading On Intelligence.


Not to be too mean about it, but I feel like this is an instance of brilliant marketing more than anything. The founder of Numenta knows how to communicate with engineers in a convincing way. Neuroscientists (and science in general) have a way of politely ignoring outsiders, and in any case computational neuroscience doesn't have terribly high standards of rigor and quality anyway.


It's the opposite: they are so bad at marketing that they are completely ignored by both the neuroscience and machine learning communities, even though they sit exactly between those two fields and their ideas are good. The company is still alive only because the founder funds it himself.


How is this brilliant marketing? Most people have never heard of Numenta, and they've managed that during a massive hype cycle in AI.


For context: the founder of Numenta is Jeff Hawkins https://en.wikipedia.org/wiki/Jeff_Hawkins


> and in any case computational neuroscience doesn't have terribly high standards of rigor

I beg your pardon, sir! The standards are relatively OK.


IBM has been doing some research on Hierarchical Temporal Memory (the concept behind Numenta's stuff) via its "IBM Cortical Learning Center" [1].

So at least some people have seen something there of more value than just marketing, and worth exploring.

[1] https://nice.sandia.gov/documents/2015/NICE3talkwwwFinal-NO%...


Though there doesn't seem to have been much activity there since 2015.


True.


Lex Fridman has a fantastic interview with Numenta's (and Palm's) founder, Jeff Hawkins.

https://lexfridman.com/jeff-hawkins/


I fiddled around with it before Google open-sourced TensorFlow. The project seems almost entirely abandoned; no commits for a year now. I suppose this may have something to do with the death of the lead developer, Matt Taylor, several months ago (RIP), but I could be wrong. It would be interesting to see if someone could get it back up to speed.


It is community-maintained here:

https://github.com/htm-community

But I guess that the focus at Numenta now is on newer, deeper ideas, mainly related to grid cells and the processing of spatial data in mini-columns.


Python 2 only, project in maintenance mode? Why post this? What's the news here?


It looks like this may be the new implementation:

https://github.com/htm-community/htm.core


This is interesting because it's an implementation of Hierarchical Temporal Memory (HTM), which is a theory of how a biological brain works in terms of memory processing: why we learn in sequences, and how a single memory is distributed across many areas. See also: https://en.wikipedia.org/wiki/Hierarchical_temporal_memory


As part of this older project, there is a nice benchmarking/comparison example covering several types of anomaly detection for sequence data. If you need to do that kind of thing, the example is worth a look.
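
If you just want a feel for what such a comparison measures, here is a self-contained toy with no NuPIC dependency (the data, the window size, and the scoring rule are all made up for illustration): the simplest member of the family of detectors such benchmarks compare HTM against, a sliding-window predictor that scores each point by its normalized prediction error.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic sequence: a noisy sine wave with one injected anomaly.
    t = np.arange(500)
    x = np.sin(t / 10.0) + rng.normal(0, 0.05, t.size)
    x[400] += 3.0  # the anomaly

    # Baseline detector: predict the next value as the mean of a sliding
    # window, and score each point by its error in units of the window's
    # standard deviation.
    window = 20
    scores = np.zeros_like(x)
    for i in range(window, len(x)):
        hist = x[i - window:i]
        scores[i] = abs(x[i] - hist.mean()) / (hist.std() + 1e-9)

    print("most anomalous index:", int(scores.argmax()))  # -> 400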


Why would I want to mess with that now when it depends on Python 2.7?

Wouldn't that be like basing something on the LuaJIT-based Torch7?



