A simple thought experiment may help in understanding our model. Imagine you reach your hand into a black box and try to determine what object is inside, say a coffee cup. Using only one finger, it is unlikely you could identify the object with a single touch. However, after making one contact with the cup, you move your finger and touch another location, and then another. After a few touches, you identify the object as a coffee cup. Recognizing the cup requires more than just the tactile sensation from the finger: the brain must also integrate knowledge of how the finger is moving, and hence where it is relative to the cup. Once you recognize the cup, each additional movement of the finger generates a prediction of where the finger will be on the cup after the movement, and what the finger will feel when it arrives at the new location. This is the first problem we wanted to address: how a small sensory array (e.g., the tip of a finger) can learn a predictive model of three-dimensional objects by integrating sensation and movement-derived location information.
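Roughly, in code, the setup looks something like this toy sketch (hypothetical names and a grid of made-up locations, nothing like Numenta's actual HTM implementation): the model pairs each sensed feature with a movement-derived location, recognition is consistency of (location, feature) pairs, and prediction is a lookup at the next location.

```python
# Toy sketch of sensorimotor object modelling (hypothetical, not Numenta's code):
# pair each sensed feature with a movement-derived location on the object,
# then predict what the sensor will feel after the next movement.

class TouchModel:
    def __init__(self):
        self.objects = {}  # learned map: object name -> {location: feature}

    def learn(self, obj, location, feature):
        self.objects.setdefault(obj, {})[location] = feature

    def predict(self, obj, location, movement):
        """Predict the feature felt after moving from `location` by `movement`."""
        new_loc = tuple(a + b for a, b in zip(location, movement))
        return new_loc, self.objects.get(obj, {}).get(new_loc)

    def recognize(self, observations):
        """Return objects consistent with a list of (location, feature) pairs."""
        return [name for name, feats in self.objects.items()
                if all(feats.get(loc) == f for loc, f in observations)]

model = TouchModel()
model.learn("coffee cup", (0, 0), "curved smooth")  # side of the cup
model.learn("coffee cup", (0, 1), "rounded lip")    # rim
model.learn("coffee cup", (1, 0), "thin loop")      # handle
print(model.recognize([((0, 0), "curved smooth"), ((1, 0), "thin loop")]))
print(model.predict("coffee cup", (0, 0), (0, 1)))  # after moving up: the rim
```

One touch rarely identifies the object; each additional (location, feature) pair narrows the candidate set, which is exactly the integration of sensation and movement the paragraph describes.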
Jeff Hawkins is scheduled to give a talk at the Human Brain Project Open Day in Maastricht, NL about this paper.
The one linked in the parent is from last year, related to the mentioned story of him running his finger across the coffee cup.
Feynman Machine: A Novel Neural Architecture for Cortical and Machine Intelligence
Details in the previous paper: https://arxiv.org/pdf/1609.03971
Does the Neocortex Use Grid Cell-Like Mechanisms to Learn the Structure of Objects?
Russell Epstein also makes a case for a predictive mechanism in "reorientation". Familiarizing ourselves with new physical locations may have a neural and cognitive equivalence to the kind of "context switch" that occurs when we switch from using one language to another.
The Neurocognitive Basis of Spatial Reorientation
The idea that the column's main function is to relate things in space makes a lot of sense.
This doesn't follow from the above, but the idea of a pattern-matching machine is certainly pretty great. I do see the parallels with FPGAs: even if the structure is the same, the function can be completely different, so "understanding" the code that is running is a super hard problem, and only in certain ways similar across all people.
But the word “column” implies discontinuous modules. You can actually observe this in sensory cortex of mice and rats, where each whisker has a 1-1 relationship with a “column” of neurons that primarily respond to that whisker. This observation is what originally led neuroscientists to propose the concept of the column as a fundamental unit of organization of the cortex.
But it turned out that lots of other brain regions don’t have this kind of discontinuous organization. Neuroscientists still use the term “cortical column” but they’re really just referring to the fact that vertically aligned neurons seem to be closely connected.
When the Numenta people talk about “cortical columns” they seem to be describing the original idea of cortical columns that didn’t pan out. It’s really weird.
0. Glasser and Van Essen https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4990127/
1. Do the stimuli to which a region responds dictate the cytoarchitecture found in that region? Or conversely, does the cytoarchitecture dictate the response properties of that region?
2. Can we find commonality in cytoarchitecture for regions which respond to a certain stimulus across species? (Is 'touch' always handled by the same cytoarchitecture regardless of mice/men/mammal/reptile/etc.?)
3. re: 2. - Is 'touch' always handled by the same cytoarchitecture in all Humans? What, if any, differences in cytoarchitecture may be found in language handling regions for people who speak Japanese vs French? Or deaf vs hearing?
4. Is the mass of a particular region, along with its cytoarchitecture, correlated with one's intelligence?
Lots of questions....
2. Yes. This we do have very strong evidence for. Visual, auditory, somatosensory, olfactory, etc. cortex are present in all mammals, and the connectivity patterns between them and other brain structures are conserved. Birds do not have laminar cortex but instead have nuclear (or nucleated) cortex, where functional units seem to be organized into nuclei or bundles of cells rather than layers. However, the genes, connectivity, and function of these units seem to be strikingly similar to those of laminar mammalian cortex.
3. Cortex is problematic. All vertebrates have basically the same peripheral circuitry for measuring the world. Rods, cones, Merkel disks (touch), taste buds, and olfactory sensory neurons are all highly conserved. So in that sense the answer is yes. When we get to cortex, things change. There is no evidence that there are fundamental cytoarchitectural differences between languages. There might be between hearing and deaf individuals, especially if the individual has been deaf from birth. See Carla Shatz's work on cortical development.
4. No evidence for this. We don't have a very good understanding of how, or even whether, certain gross anatomical features are related to intelligence. The Human Connectome Project is probably the closest to having population data that could answer the question. Otherwise the geneticists are way ahead, but we don't know how their results are manifest in the brain.
The other question/statement I have is:
Is there a discrete number of cytoarchitectural patterns that can/do exist, and are they catalogued? Assuming that through various amino acids and proteins we can get a stem cell to present as various tissue types, can we get one to present as a neuron? But then the cytoarchitecture is probably dictated by genetic coding and not amino acids/proteins, so steering cell clusters to create a particular 'brain circuit' is decades away, I assume? Unless there is a way to scaffold the desired outcome by putting new stem cells/neurons next to others which are arranged in the desired format already?
2. Yes, people are working on this, but whether the neurons that are created are actually like the ones in a living brain requires much more research. The search term for this is 'neural conversion', and there is a key paper on the topic.
3. It is either much simpler, or much more complicated. Axons send projections along signalling gradients during development and have something akin to a lock-and-key system (made of cell surface proteins) that helps them hit the correct targets. If you can make the signal and design the lock and key, in theory it would work. The issue, of course, is that we have zero idea what sticking a specific new connection in will do, though there is a paper which suggests that pure connectivity changes can produce a gain of function. The other area to look in for stuff related to this is axonal repair (spinal cord repair), and that is where all the theory seems to go out the window, because there are so many signals that the neurons listen to. If you dig in there you will see that people have tried scaffolds, signals, stem cells, and all other manner of hocus pocus to get it to work, because the search space they are in is terrifyingly huge.
Jeff Hawkins is not a biologist or a neuroscientist (though he would probably qualify) trying to explain how the brain works. He is an engineer using whatever bits he sees as potentially useful.
It's been a bit frustrating that after 15 years they still haven't made headlines with something more practical and groundbreaking, so I'm a bit excited to hear what will be unveiled at this conference tomorrow.
His theories may be right or wrong, but he's doing science right.
So it doesn't seem to me that the problem is that Hawkins has been uncommunicative; it's that the wider academic and research community just hasn't been interested. Perhaps that's simply because his theories don't have merit, but it's also hard to shake the feeling that it's more because his career, and therefore his theories, hasn't fit the standard academic mould.
It seems to me that, if you want to be an influential theorist, you're expected to study at a prestigious university, work with an older, respected researcher while getting your PhD, then become a professor, set up a research lab, mentor some PhD students of your own, then finally maybe spin off your research into some startups, or take a high-paying research position at a big company.
Hawkins hasn’t followed this path, so he has no network of peers and former students to evangelise his work and carry it forward. He’s done research and published it, like the scientific ideal says you should, but he’s missing the structure necessary to make it land in the research world.
It’s like a sort-of reverse cargo cult. He’s done the work of constructing a real airstrip and working planes, but he can’t persuade anybody to fly, because they’ve never seen or heard of air travel before.
If tweets were searchable I could find it, but I was really taken with this book when I read it, and I think I made a pretty grand statement that future generations would compare it to work like The Origin of Species.
Now I’m interested to dig up my copy and read again :)
So instead of just cutting out the middle bit (getting a PhD, becoming a professor, setting up a research lab, and mentoring students for decades) and leaping straight to "make a startup from an idea that works", every single person decided to go the hard road because it seems proper?
Your cargo cult idea works - he's arranged all the trappings of being an influential theorist - funding and research labs and media coverage - except perhaps for the one thing that lets him fly: a worthwhile idea.
The ideas presented, and the level of detail in their theory, are such that the most likely response from everyone else is just "Nice theory" and a shrug.
Hawkins and Co. have been working on this for years. Numenta sends frequent newsletters and tries to keep people interested, but there is little concrete there except some interesting ideas.
I was a student of linguistics when I discovered it, having started a move from theoretical linguistics to a masters in more computational stuff just as machine learning was beginning to become more widely used. Hawkins was one of a few things that combined to make me so completely disenchanted with the entire field of computational linguistics, especially as it was practiced at my university, that I just left and became a regular software developer. I decided I'd rather have to center things in CSS than throw progressively fancier maths at bag-of-words models and poorly annotated corpora and pretend it had anything to do with thinking machines.
(I don't regret this move at all since another major factor was I didn't want to go do a PhD somewhere far away; I do somewhat regret going into computational stuff in the first place, though - before that I wanted to become one of those Indiana Jones linguists who go to the mountains and write grammars of languages only spoken by a handful of 80-year-olds... that would have been really fun)
I don't think it's provable in this form; you'd also have to specify what a "single algorithm" is. Algorithms themselves are delineated by human categorizing: what you call a "single" algorithm can be subdivided into several simpler algorithms, and at the same time can be just one part of what someone else will call "a single algorithm".
Not necessarily. We got a mix of innate head-start and training from parents/others for a really, really long time before we could do that. The brain's learning style and speed even change over time, going from rapid, intuitive learning to slower, more stable learning in a combination of self- and other-driven ways. At some point our mind is pretty well-formed, and our own thoughts and explorations drive much of our behavior. And somewhere between the kid and adult stages we start easily learning new skills.
There's probably a mix of general algorithms, specialized ones, and stuff in between. It changes over time, too, instead of being a static, pre-trained model.
You could also easily have different algorithms for different senses, evolved at various historical stages...
This is an exciting idea to me. I don’t fully understand it, but it resonates on some intuitive level. Makes me think of spatial audio, echolocation, our sense of balance, all these forms of intelligence that rely on understanding space.
Random trivia- I interviewed with Numenta back in 2007. Got an offer, was excited by the vision, but I didn’t see how they would have any kind of product within the next 5 years. 11 years later, they’re maybe about to launch something. Much respect for them pushing through on this wildly ambitious idea. I’m rooting for them.
These are all brainstem functions. No cortex required!
Binaural hearing is basically sorted out by the time you hit thalamus, and most of the heavy lifting is done in the inferior colliculus, medial superior olive and dorsal cochlear nucleus.
You can map any set of concepts into an N-dimensional space if you have a similarity metric between them. If some concepts are anchored in space, they define the overall layout. It could be that loud is related to large, which is related to tall, which is naturally anchored above eye level.
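As a rough illustration of that claim (the similarity numbers below are invented for the example), classical multidimensional scaling does exactly this: it turns a matrix of pairwise similarities into coordinates in an N-dimensional space.

```python
# Sketch: recover an N-dimensional layout of concepts from pairwise
# similarities alone, via multidimensional scaling (MDS).
# The similarity values are made up for illustration.
import numpy as np
from sklearn.manifold import MDS

concepts = ["loud", "large", "tall", "quiet", "small"]
similarity = np.array([
    [1.0, 0.8, 0.6, 0.1, 0.2],
    [0.8, 1.0, 0.7, 0.2, 0.1],
    [0.6, 0.7, 1.0, 0.3, 0.2],
    [0.1, 0.2, 0.3, 1.0, 0.8],
    [0.2, 0.1, 0.2, 0.8, 1.0],
])
dissimilarity = 1.0 - similarity  # MDS expects distances, not similarities

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
for name, (x, y) in zip(concepts, coords):
    print(f"{name:>5}: ({x:+.2f}, {y:+.2f})")
```

Concepts that are anchored in space ("tall" sitting above eye level) would then fix the orientation of the whole layout.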
- Heavy metal
- Clanky furnaces
- Stompy boots
Just a few.
In computer hardware and software terms: we don't necessarily need to clone an x86 CPU, glue and I/O chips, and run an actual copy of firmware, bootloader, and Windows. If we understood just one of these, how the x86-like CPU works, and could implement a functional simulator that behaves similarly enough, that would be extraordinarily useful. ANNs are a primordial stab at this.
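To make that concrete, here is a toy functional simulator for a made-up three-instruction ISA (deliberately not x86): it reproduces the behaviour of fetch/decode/execute without modelling a single gate or transistor, which is the level of emulation the analogy is pointing at.

```python
# Toy functional CPU simulator (invented mini ISA, not x86): behaviour only,
# no gates, no timing, no microarchitecture.

def run(program):
    regs = {"a": 0, "b": 0}
    pc = 0
    while pc < len(program):          # fetch
        op, *args = program[pc]       # decode
        if op == "load":              # execute: load immediate into a register
            regs[args[0]] = args[1]
        elif op == "add":             # regs[args[0]] += regs[args[1]]
            regs[args[0]] += regs[args[1]]
        elif op == "jnz":             # jump to args[1] if register is non-zero
            if regs[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return regs

print(run([("load", "a", 2), ("load", "b", 3), ("add", "a", "b")]))  # a == 5
```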
But this article is incredibly light on detail. I hope he does something like a follow up book to explain what he has learned in the time since.
> In the neocortex 6 layers can be recognized although many regions lack one or more layers, fewer layers are present in the archipallium and the paleopallium.
What this means for optimal artificial neural network architectures and parameters, particularly with regard to logic, reasoning, and inference, will be interesting to learn.
According to "Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function" (https://www.frontiersin.org/articles/10.3389/fncom.2017.0004...), the human brain appears to be [at most] 11-dimensional (11D) in terms of algebraic topology: https://en.wikipedia.org/wiki/Algebraic_topology
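For a sense of what "dimension" means in that paper: a clique of k+1 all-to-all connected neurons is treated as a k-dimensional simplex, so 11D corresponds to cliques of 12 neurons. A toy count with networkx (the paper actually uses directed cliques; this undirected sketch only illustrates the bookkeeping):

```python
# Count cliques in a small graph and report the "dimension" each one would
# have as a simplex (size - 1). Purely illustrative, not the paper's method.
import networkx as nx

G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (1, 2),   # one triangle
                  (2, 3), (3, 4), (2, 4),   # another triangle
                  (0, 3)])

for clique in nx.find_cliques(G):           # maximal cliques
    print(f"clique {sorted(clique)} -> dimension {len(clique) - 1}")
```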
"Study shows how memories ripple through the brain" https://www.ninds.nih.gov/News-Events/News-and-Press-Release...
> The [NeuroGrid] team was also surprised to find that the ripples in the association neocortex and hippocampus occurred at the same time, suggesting the two regions were communicating as the rats slept. Because the association neocortex is thought to be a storage location for memories, the researchers theorized that this neural dialogue could help the brain retain information.
 https://github.com/bup/bup (git packfiles)
before the world can build artificial intelligence, it must explain human intelligence so it can create machines that genuinely work like the brain
In part this is a curse on ML/AI, since so many of the terms that are used to define the subject and goals of the field are pre-scientific, even religious in origin - consciousness and will are two other examples.
The concept described in this article is the same one in his book "On Intelligence", which was written long ago, before we knew the impact of a neural network's arrangement and architecture on its performance.
Examples like the paper that described a task-specific region of the brain handling face recognition are not supported by his model.
What would interest me more than the one algorithm is finding out the loss functions that steer our brain into fast learning after birth. We have prior knowledge about the world encoded in the structure of the brain and in our loss functions (what the brain optimises for). The brain has many such loss functions, and they are genetically evolved; it can't possibly do all it does with just one, the way typical neural networks do. These priors could be readily transferred to AI models.
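A minimal sketch of what "many loss functions" could look like in an artificial network (the objectives and weights below are invented placeholders, not anything the brain is known to optimise): one model trained against a weighted sum of several objectives at once.

```python
# Sketch: one network, several simultaneous objectives, combined with fixed
# weights. The specific losses here are placeholders for illustration.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 10))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.randn(64, 10)
target = torch.randn(64, 10)

for _ in range(100):
    out = net(x)
    loss_recon = nn.functional.mse_loss(out, target)   # objective 1: reconstruction
    loss_sparse = out.abs().mean()                     # objective 2: sparsity prior
    loss_smooth = (out[1:] - out[:-1]).pow(2).mean()   # objective 3: smoothness prior
    loss = 1.0 * loss_recon + 0.1 * loss_sparse + 0.01 * loss_smooth
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this framing, the "priors" live in which objectives are present and how they are weighted, which is the part one could imagine transferring to AI models.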
And perhaps someone will be able to correct me on this, but didn't Kant treat spatial awareness as a priori and presumably rather fundamental?
What percentage of a bird's parts did we need to understand for the first successful aircraft?
It may be anywhere from very helpful to unnecessary; no one knows yet.
But you do have to understand [generally] how a bird works to [generally] emulate its wings [and fly].
Evolution has given us some incredible stuff, but there could exist simpler, more elegant solutions. A barn door will give you lift, just not as efficiently as a NACA airfoil shape.
Evolutionary mimicry has its place in engineering, no doubt. But arrogantly stating that we must emulate biological systems is blatantly and demonstrably false. Otherwise we wouldn't have nuclear power, superconductivity, lightbulbs, sewing machines, or computers.
The point is, history has shown over and over that it may be more or less helpful in building the best general AI we can. We just don't know.
Making an unqualified declaration that it's required is hard not to see as hyperbole at this point in history.
So is his one statement wrong? Yes. Do I want to stop him? Heck no, giddy up. I hope he makes big breakthroughs.