Otherwise, HTM inventor Jeff Hawkins' book "On Intelligence" is one of the top 3 or so most fascinating books I've ever read. It doesn't cover HTM, though, just how the brain works at a conceptual level, but in a way I haven't seen anyone else explain it. Jeff clearly has an ability to see the forest for the trees in a way that is not commonly found. This is one of the reasons I think HTM might be on to something, although it of course has to prove itself in real life too.
But we should remember how long classic neural networks were NOT overly successful, and almost dismissed by a lot of people (including my university teacher, who was rather skeptical about them when I took an ML course some 12 years ago; I personally believed in them a lot). We had to "wait" for years and years until enough people had eventually put enough work into figuring out how to make them really shine.
Edit: Fixed book link.
We already knew in the late 80s/early 90s that neural networks were universal function approximators, and there was an era in that period when neural nets were VERY successful (or at least: very influential among the ML circles of their day). Sure, they were dismissed once kernel machines came about, simply because those had more to offer at the time. But it would be a mistake to compare HTM with classical neural networks: neural nets were always known to do something sensible and to "work", even if they might not be the state-of-the-art method.
In stark contrast, HTM has been "out there" for over a decade by now, with (as far as I know) not a single tangible result, neither theoretical nor practical. They never managed to cobble together even a single paper with credible results, even though they came out with it right at the time when connectionist approaches became popular again (yes, there were papers, but there's a reason they only got published in 2nd or 3rd tier venues). From where I stand, it's a "hot air" technology that somehow seems to stay afloat because the person behind it knows how to write popular science books. Every researcher I know who tried to make HTM work came away with the same conclusion: it just doesn't.
Is there anything you can share with this? I'd like to read more about how and why researchers came to that conclusion. Thanks.
Oh, and then the whole thing output a jumble of meaningless bits that had to be classified, via an algorithm Numenta kept hidden away as the secret sauce... but if you have to use a NN classifier to understand the results of your HTM... Too many red flags of snake oil. And I really wanted it to work. It doesn't help that Jeff Hawkins has largely abandoned HTM for new cognitive algorithm pursuits.
There's also a recent Lex Fridman podcast episode where he interviews Jeff Hawkins on this theme: https://lexfridman.com/jeff-hawkins/
I beg your pardon sir! The standards are relatively OK
At least some people have seen something of more value than just marketing there, and worth exploring.
But I guess the focus at Numenta now is on newer, deeper ideas, mainly related to grid cells and the processing of spatial data in mini-columns.
Wouldn't that be like basing something on the LuaJIT-based