For people wanting to look into HTM (Hierarchical Temporal Memory), do check out Numenta's main website [1], in particular the papers [2] and videos [3] sections.

Otherwise, HTM inventor Jeff Hawkins' book "On Intelligence" [4] is one of the top 3 or so most fascinating books I've ever read. It doesn't cover HTM itself, just how the brain works at a conceptual level, but in a way I haven't seen anyone else explain. Jeff clearly has an ability to see the forest for the trees in a way that is not too commonly found. This is one of the reasons I think HTM might be on to something, although it of course has to prove itself in real life too.

But we should remember how long classic neural networks were NOT overly successful, and were almost dismissed by a lot of people (including my university teacher, who was rather skeptical about them when I took an ML course about 12 years ago; I personally believed in them a lot). We had to "wait" for years and years until enough people had thrown enough work at figuring out how to make them really shine.

[1] https://numenta.org/

[2] https://numenta.com/neuroscience-research/research-publicati...

[3] https://www.youtube.com/user/OfficialNumenta

[4] https://www.amazon.com/Intelligence-Understanding-Creation-I...

Edit: Fixed book link.




> we should remember how long classic neural networks were NOT overly successful

We already knew in the late '80s/early '90s that neural networks were universal function approximators, and there was an era in that period when neural nets were VERY successful (or at least: very influential in the ML circles of their day). Sure, they were dismissed once kernel machines came about, simply because those had more to offer at the time. But it would be a mistake to compare HTM with classical neural networks: neural nets were always known to do something sensible and to "work", even if they might not be the state-of-the-art method.
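Concretely, the classical universal approximation statement (Cybenko 1989; Hornik, Stinchcombe & White 1989) is, roughly: for any continuous $f$ on a compact set $K \subset \mathbb{R}^n$ and any $\varepsilon > 0$ there exist $N$, weights $w_i$, biases $b_i$ and coefficients $\alpha_i$ such that

    \[ \sup_{x \in K} \Big| f(x) - \sum_{i=1}^{N} \alpha_i \, \sigma(w_i^\top x + b_i) \Big| < \varepsilon, \]

where $\sigma$ is a fixed sigmoidal activation. So even back then there was a theoretical guarantee that, with enough hidden units, a one-hidden-layer net can fit any continuous target.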

In stark contrast, HTM has been "out there" for well over a decade by now, with (as far as I know) not a single tangible result, neither theoretical nor practical. They never managed to cobble together even a single paper with credible results, even though they came out with it right at the time when connectionist approaches became popular again (yes, there were papers, but there's a reason they only got published in 2nd- or 3rd-tier venues). From where I stand, it's a "hot air" technology that somehow seems to stay afloat because the person behind it knows how to write popular science books. Every researcher I know who tried to make HTM work came away with the same conclusion: it just doesn't.


"Everyone researcher I know who tried to make HTM work came away with the same conclusion"

Is there anything you can share about this? I'd like to read more about how and why researchers came to that conclusion. Thanks.


I was a graduate researcher implementing HTM in a hardware accelerator. The biggest problem was that there was never any sort of specification for HTM beyond a white paper that only vaguely described its internal structures. And the picture the white paper painted was a design with N^N different hyperparameters.

Oh, and then the whole thing output a jumble of meaningless bits that had to be classified, an algorithm Numenta kept hidden away as the secret sauce... but if you have to use an NN classifier to understand the results of your HTM... Too many red flags of snake oil. And I really wanted it to work. It doesn't help that Jeff Hawkins has largely abandoned HTM for new cognitive algorithm pursuits.
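For anyone unfamiliar with what "a jumble of meaningless bits" means here: HTM's output is a sparse binary vector (an SDR), and to get an actual prediction you still need some classifier bolted on top. A minimal sketch of one common approach, nearest-neighbour voting by bit overlap (just an illustration with made-up toy vectors, not Numenta's unpublished classifier):

    import numpy as np

    def overlap(a, b):
        # Number of bits active in both SDRs (dot product of 0/1 vectors).
        return int(np.dot(a, b))

    def classify_sdr(sdr, labelled_sdrs):
        # Return the label of the stored SDR with the highest bit overlap.
        # labelled_sdrs: list of (sdr_vector, label) pairs collected during training.
        best_label, best_score = None, -1
        for stored, label in labelled_sdrs:
            score = overlap(sdr, stored)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

    # Toy usage with 12-bit SDRs (a real HTM region outputs on the order of 2048 bits at ~2% sparsity).
    train = [
        (np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0]), "A"),
        (np.array([0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0]), "B"),
    ]
    query = np.array([1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1])
    print(classify_sdr(query, train))  # -> "A" (largest overlap with the first stored SDR)

Which only reinforces the parent's point: if the useful work happens in the bolted-on classifier rather than in the HTM itself, it's hard to credit HTM with the result.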



