Saw Jeff Hawkins talk at Strange Loop years ago, and I thought his book was brilliant. But what has his company actually accomplished in machine learning/AI? I've seen a few YouTube videos teaching HTM theory, but it's not clear that the theory results in anything even marginally useful. What am I missing?
In modern deep learning there are multiple approaches that can be argued to draw inspiration from neuroscience: Capsule Networks, Helmholtz Machines, Energy-Based Models (score-based generative modeling / diffusion models), and Associative Memory.
Information theory, Bayesian methods, and approximate computation are more relevant sources of inspiration. Neuroscience is not the field that studies intelligent behavior.
An HTM layer requires far more parameters than a traditional deep neural network layer: on the order of gigabytes for one or two basic HTM layers. A cortical column will have many such layers, and a 1000-Brains model will have many (thousands of?) cortical columns. In my opinion, the underlying idea is fantastic, but the practical aspects of implementing it are daunting.
That's what killed deep learning in the '90s, right? The ideas were there, they were just impractical, and now they're Python libraries you can run on a laptop. It could be similar here, although I don't know anything about the viability of their HTM layers!
I'm not sure why you think HTM layers are bigger than modern DL layers. The HTM layer configuration used in the paper (B=128, M=32, N=2048, K=40) comes to 335M parameters. Compare that to GPT-3, with 96 layers of roughly 1.8B parameters each. Much larger models than GPT-3 have already appeared, with no end in sight to how much further they can scale.
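For what it's worth, those numbers check out as a simple product of the stated dimensions. A back-of-the-envelope sketch (assuming the HTM layer's parameter count is just B x M x N x K; the exact formula depends on the paper's layer definition):

```python
# Back-of-the-envelope parameter counts. Assumes the HTM layer's
# parameter count is simply the product B * M * N * K; the actual
# formula depends on how the paper defines the layer.

B, M, N, K = 128, 32, 2048, 40
htm_params = B * M * N * K
print(f"HTM layer: {htm_params / 1e6:.0f}M parameters")      # 336M

# GPT-3 for comparison: 96 layers at roughly 1.8B parameters each.
gpt3_params = 96 * 1.8e9
print(f"GPT-3: {gpt3_params / 1e9:.0f}B parameters")         # 173B

# Rough memory footprint at 4 bytes per float32 parameter.
print(f"HTM layer memory: {htm_params * 4 / 2**30:.2f} GiB")  # 1.25 GiB
```

So a single HTM layer at that configuration fits comfortably in laptop RAM, roughly 500x smaller than GPT-3's total parameter count.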
The point is, if HTM worked, people would throw compute resources at it, just like they do with DL models. But it doesn't.
Yeah, Numenta is not going anywhere. DL took over the serious applications a while back, and Numenta doesn't have anything substantial to offer by comparison.