What intelligent machines need to incorporate from the neocortex (ieee.org)
153 points by dendisuhubdy on June 2, 2017 | 50 comments



Recently Matt Taylor from Numenta had a Reddit AMA (https://www.reddit.com/r/artificial/comments/6beeqj/5182017_...). I was disappointed to find out that Numenta does not really have collaborations with neuroscientists to test or adapt their theories. Their theories in general are not well known to computational neuroscientists either. In that sense, I'm not even sure about Numenta's authority on neocortical theories.

For example, the article mentions the ability of clustered synapses to act independently, but, on the one hand, it has been shown that independent dendrites can be approximated as an extra neural network layer (so they ARE covered by today's ANN approximation), and OTOH there's a number of papers showing that synaptic clustering does not exist in sensory areas. And learning by rewiring is basically the introduction of random connections which persist only if their weight increases enough (roughly corresponding to the continuous formation of filopodia and the fact that large spines persist longer).
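To make that rewiring idea concrete, here's a toy NumPy sketch of the prune-and-grow loop (all the numbers and the "learning signal" are made up for illustration; this is not anyone's published model):

    import numpy as np

    rng = np.random.default_rng(0)
    n_pre, n_post = 100, 50
    prune_threshold = 0.05   # synapses weaker than this are removed
    grow_count = 10          # random candidate synapses trialed per step

    # Sparse weight matrix; 0 means "no synapse exists"
    mask = rng.random((n_post, n_pre)) < 0.05
    weights = np.where(mask, rng.normal(0.1, 0.02, (n_post, n_pre)), 0.0)

    def rewire_step(weights, learning_signal):
        # Update only the synapses that exist (stand-in learning rule)
        existing = weights != 0
        weights = np.where(existing, weights + 0.1 * learning_signal, weights)
        # Prune: weak synapses disappear, like filopodia that never stabilize
        weights[np.abs(weights) < prune_threshold] = 0.0
        # Grow: introduce a few random connections at small initial weight;
        # they persist only if later updates strengthen them
        for _ in range(grow_count):
            i, j = rng.integers(n_post), rng.integers(n_pre)
            if weights[i, j] == 0:
                weights[i, j] = rng.normal(0.1, 0.02)
        return weights

    weights = rewire_step(weights, rng.normal(0, 0.5, (n_post, n_pre)))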

Machine learning at the moment is an empirical science that has made great strides without consulting neuroscience for it. I think that has been a good thing: without having to bend toward some biological plausibility, researchers have been more exploratory and creative, which has led to the creation of an empirical body of knowledge from which neuroscience could benefit in the future. OTOH, having watched the field of computational neuroscience, there has not been a lot of progress since, basically, the 80s. So I believe it would be best to let each of the two fields go its own way.


> OTOH, having watched the field of computational neuroscience, there has not been a lot of progress since, basically, the 80s. So I believe it would be best to let each of the two fields go its own way.

I wouldn't really say that. There's a lot of progress being made in using and understanding biological processes for useful tasks. Not surprisingly, the biological mechanisms are complex and rich and vary a lot through the brain (like you mentioned with the dendrites; dendrites also sometimes work like coincidence detection mechanisms in some layers of the cortex [1], for instance).

[2] gives a very nice overview of what is needed from both machine learning and computational neuroscience to solve the entire problem of understanding the human brain.

There was the liquid state computing paper [3] in 2002 which showed that random networks of spiking neurons can perform some computations and have memory even in the absence of special learning rules.
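The rate-based descendant of that idea (reservoir computing / echo state networks) is small enough to sketch. Here's a minimal toy version with made-up parameters, where the recurrent network stays random and only a linear readout is ever trained:

    import numpy as np

    rng = np.random.default_rng(1)
    n_res = 200

    # Fixed random recurrent weights, rescaled so the largest eigenvalue
    # magnitude is < 1 (the "echo state" / fading-memory condition)
    W = rng.normal(0, 1, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))
    W_in = rng.normal(0, 0.5, n_res)

    def run_reservoir(u):
        # Drive the untrained random network and record its states
        x = np.zeros(n_res)
        states = []
        for u_t in u:
            x = np.tanh(W @ x + W_in * u_t)
            states.append(x.copy())
        return np.array(states)

    # Toy task: reproduce the input from 5 steps ago (tests memory)
    u = rng.normal(0, 1, 500)
    X = run_reservoir(u)
    y = np.roll(u, 5)
    W_out, *_ = np.linalg.lstsq(X[50:], y[50:], rcond=None)  # readout only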

There has also been quite some work on understanding e.g. the role of assemblies of neurons (groups of neurons firing), assembly sequences, plasticity rules, theory formulating neural activity in a probabilistic manner, a better understanding of the role of inhibition, dendrites etc.

And lastly, there have been huge advances in neuromorphic computing, both digital and analog. In some cases, the performance of these chips (which use spiking neurons) approaches that of state-of-the-art machine learning, e.g. [4].

[1] http://www.sciencedirect.com/science/article/pii/S0166223612...

[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5021692/

[3] http://www.mitpressjournals.org/doi/abs/10.1162/089976602760...

[4] http://www.pnas.org/content/113/41/11441.abstract


Nice references! I have some criticism of "Toward an Integration of Deep Learning and Neuroscience" which I've talked to the authors about. I felt like they downplayed the importance of integration between neuroscience and machine learning [1], while over-playing the biological plausibility of Deep Learning [2]. I'm also uncomfortable with the characterization of Neuromorphic computing as "Deep Learning on a Chip", as this dismisses the possibility of online learning and neural dynamic systems [3], which I think are necessary for truly intelligent systems.

As for Liquid State Machines, those evolved into Reservoir Computing and Echo State Networks. If you want to read more about them, I would recommend this paper comparing them to the Neural Engineering Framework [4] (with code!) to get a good idea of the state of the field.

[1] https://medium.com/@seanaubin/deep-learning-is-almost-the-br...

[2] https://cogsci.stackexchange.com/q/16269/4397

[3] https://medium.com/@seanaubin/a-way-around-the-coming-perfor...

[4] https://github.com/arvoelke/delay2017/blob/master/delay2017....


Definitely people are working, but we don't have an addition to our methods of something fundamental. The papers you're referencing provide interesting conceptual theories, but none of them will survive for decades the way Hodgkin-Huxley's or Rall's models did. A lot of work on dendrites goes back to the 90s (e.g. the seminal work of Mel: http://www.nature.com/neuro/journal/v7/n6/abs/nn1253.html) and we still don't have a convincing theory of what they do.


> Machine learning at the moment is an empirical science that has made great strides without consulting neuroscience for it.

What is your basis for this statement?

Consider Geoff Hinton's publishing record, which contains many collaborations with notable psychologists / neuroscientists (e.g. Jay McClelland) who helped bring neural networks back into the spotlight.

http://www.cs.toronto.edu/~hinton/papers.html


True. But, conversely, Hinton himself says in some of his videos (sorry, don't have the specific link in front of me) something to the effect of "ANN's are only very loosely modeled on real neurons and we make no pretense that this is anything like the way a real brain works". (Paraphrased, possibly poorly).

The sense I get is that Machine Learning is mostly an empirical field at the moment, without a terribly solid theoretical underpinning. Which is not, of course, to suggest that ML researchers haven't consulted neuroscience at all. But there still seems to be a pretty big disconnect between neuroscience and ML. Unless I've just really missed something, which is entirely possible.


Yeah, that's fair. The goals of machine learning and neuroscience seem fairly different. Nobody in ML will complain if your neural network has great prediction but isn't supported biologically. OTOH, neuroscientists might take a model that isn't the best in terms of prediction if it seems to better represent how the brain operates.

There seems to be some interesting interplay, though. For example, DeepMind recently sponsored this ANN conference where most of the speakers were neuroscientists.

https://sites.google.com/site/ncpw15/


I think it's fair to say that Hinton's justifications are post-hoc. I think a lot of neural network scientists also do that, as there are many basic components that don't have a biological analogue or even contradict biology, like backpropagation, LSTMs, training RBMs layer-by-layer, and semilinear activation functions. I can't think of an instance, for example, where the designers made a choice of connectivity "because that is how the brain does it" and ended up with a better-performing network. There have recently been attempts to match ANNs with spiking networks (e.g. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5021692/), but only rough sketches of ideas are provided.


Hi, this is Matt Taylor from Numenta.

What I said, exactly, is "We have relationships with some neuroscientists".

You can see the text (https://www.reddit.com/r/artificial/comments/6beeqj/5182017_...) and a video of me talking during the AMA about our relationships with neuroscientists (https://youtu.be/9fk8tg_jqh0?t=43m6s).

The fact is that we have many ongoing relationships with neuroscientists. But I don't know them personally so I didn't give any details. We also have a Guest Scientist program aimed at increasing these collaborations.


> I was disappointed to find out that Numenta does not really have collaborations with neuroscientists to test or adapt their theories. Their theories in general are not well known to computational neuroscientists either. In that sense, I'm not even sure about Numenta's authority on neocortical theories.

Numenta's learning algorithm isn't biologically plausible [1], so although the resulting networks might be mappable to the brain in the same manner as Deep Learning and FORCE-trained networks, there's no way to map the learning period onto brains.

[1] https://forum.nengo.ai/t/hierarchical-temporal-memory-vs-nef...


For those interested, Jeff Hawkins wrote a book called On Intelligence and he is largely the guy behind: https://en.wikipedia.org/wiki/Hierarchical_temporal_memory

Deep learning proponents focus on solving specific problems using mathematical models inspired by biology, whereas HTM proponents argue that biology is important and should play a more central role. Deep learning folks are more geared towards applied AI, whereas HTM folks are far more ambitious and are trying to solve the problem of intelligence/AGI.

Numenta, the company behind HTMs, has a platform called NuPIC for "intelligent computing": https://github.com/numenta/nupic

But as far as I know, HTMs have never been successfully applied to anything non-trivial.


I recently read On Intelligence, very interesting take on the core principles of the brain. Perhaps someone more knowledgeable can comment on this, but it feels like deep recurrent networks do a pretty good job of capturing the concepts that Jeff Hawkins considers really important: hierarchy/invariance and sequences/prediction. Plus they're really effective in practice!


The concept Hawkins considers really important above all else is the biology. He rejects deep learning as being a fundamentally different form of computation from the one the brain performs. There are no parallels to be drawn between RNNs and HTMs, really. Apples and oranges. HTM's goal is literally to try and emulate the neocortex.


"Emulate the neocortex" is at best a metaphoric shorthand that will be misunderstood by many, and quite likely the wrong way of thinking about it. It is certainly not to be taken literally.

There are plenty of things going on in a real neocortex that are not essential to intelligence and Numenta was working hard from the start to pick out the important ones at an appropriate level of modeling. This distinction becomes especially important in contrast to things like Henry Markram's projects, where people seem to be happily spending major-multi-million-euro sums of public funds to run poorly understood simulations with insufficient detail on a really big computation cluster.

RNNs are crude compared to the HTM as well as to the more recent CLA model, but they do share, for instance, the formulation as a dynamical system with recurrence. Some of the publications coming from people at Numenta have made direct comparisons between the CLA and, for instance, LSTM-based networks, and in my opinion it is entirely reasonable to do that. They are all suited to sequence modelling.

Having said that, I agree the comparison to RNNs isn't to be made without understanding some of what goes into the HTM or CLA, and I agree that Hawkins emphasizes biological intelligence.


They built an infrastructure monitor using the HTM: https://grokstream.com

Also http://www.cortical.io is pretty darn cool.


And perhaps what the neocortex needs to incorporate from intelligent machines: backpropagation?

I mean, so far we have no biological evidence of backpropagation, and it seems pretty useful.


I have followed Hawkins's theory from its inception ca. 2005; since then they have been through multiple evolutions: from hierarchical Bayesian models (with then-collaborator Dileep George) to now a sparse-representation approach with Subutai. However, they have struggled to find a "killer app". Barring some toy examples, there is very little by way of real-world use cases.


I strongly believe that if anyone is going to solve human-level intelligence in our lifetime, it's going to be Numenta.

'But we can't even figure out worms!' - Worms are made up of neurons but they perform a different function from what the neocortex does so studying them is not like studying a simpler problem, it's studying an entirely different one.

'But ML can do X better!' - Unlike industry or academia, Numenta's primary goal is to figure out how the neocortex works, it's not about profit or publications.

'Biology contains details we don't need!' - Numenta's approach is not biologically inspired, it's more like biologically constrained. They avoid implementations that are functionally different from how the neocortex works.

I would highly suggest reading On Intelligence to learn more.


"Unlike industry or academia, Numenta's primary goal is to figure out how the neocortex works, it's not about profit or publications."

Numenta is literally a corporation...


>Worms are made up of neurons but they perform a different function from what the neocortex does so studying them is not like studying a simpler problem, it's studying an entirely different one.

Huh? Take your pick of creatures that display highly intelligent behaviour and have less complex nervous systems than humans. Insect navigation would be a good example. If we can't figure out the biological basis of that -- when in many cases we already have a good idea of what calculations are being performed -- then our chance of "solving human level intelligence" is zero.


The problem is in defining intelligence by behavior. What we see as intelligence in insects most likely originates from an entirely different process.

Jeff talks about this exact issue in a speech he gave 10 years ago. He mentions it around the 11 min mark, but the whole video is a great intro to what this is all about: https://www.youtube.com/watch?v=G6CVj5IQkzk


> What we see as intelligence in insects most likely originates from an entirely different process.

That is pure speculation.

As far as I can tell, what you're suggesting is that it might possibly be easier to figure out how human intelligence works than it would be to figure out how an insect performs a few calculations, in cases where we know almost exactly which calculations are being performed (see e.g. http://science.sciencemag.org/content/312/5782/1965).

I don't see any reason to think that this is true other than wild optimism.


How would one go about self-studying neuroscience to the level of say, a second year grad student not yet specializing in anything? That is to say, familiar with the basic concepts and able to make sense of current research given enough time.


Well it depends on which part you are interested in.

The Neuroscience book by Mark Bear gives a nice introductory overview of the biology behind a lot of neural processes: https://www.amazon.com/Neuroscience-Exploring-Mark-F-Bear/dp...

If you're interested in computational neural models, I would highly recommend Wulfram Gerstner's recent book (available for free online): http://neuronaldynamics.epfl.ch

The book by Dayan and Abbott on theoretical neuroscience is quite nice too: https://mitpress.mit.edu/books/theoretical-neuroscience

After this, you'll probably want to read new review papers, since it's a field that's moving quite fast now. (Especially with new measurement techniques in the past few years)



That and Larry Squire's Fundamental Neuroscience (https://www.amazon.com/Fundamental-Neuroscience-Fourth-Squir...) are the bibles. Don't be intimidated by the size of the books; they are actually easy to read.


Udacity, coursera, edx, YouTube, Wikipedia, google should have you covered.


Free excellent courses:

Start with "Fundamentals of Neuroscience" by Harvard, free multimedia course that requires only a high school level education:

https://www.mcb80x.org/

then go to

"Medical Neuroscience" on Coursera (the professor is a great teacher):

https://www.coursera.org/learn/medical-neuroscience

The 2nd course especially is pretty big, one of the largest online courses there is in terms of videos to watch and things to learn. But while it is a lot, it is much easier than the quantum mechanics course(s) on edX at half the size. You don't have to think much, just listen and learn.

After that you have a very solid foundation; then check out more such courses on the same platforms.


"Machines won’t become intelligent unless they incorporate certain features of the human brain."

How arrogant! The features they list aren't unique to humans, or even to mammals. They're all present in the nidopallium of birds as well. Machines won't become intelligent unless they incorporate certain features of intelligent life.


>"Intelligent machines don’t have to model all the complexity of biological neurons, but the capabilities enabled by dendrites and learning by rewiring are essential. These capabilities will need to be in future AI systems."

This is the fundamental challenge that we face. Our ability to build AI that can emulate human thought is not limited by a poor understanding of the brain. Our current theoretical understanding of how human cognition arises from neural processes is probably close to a level sufficient to build human level AI. What limits our progress is the staggering computational demand of simulating a massive network of highly dynamic units.

Shortcuts, simplifications and clever algorithms will only get us so far. At this point, processing power rather than theoretical understanding is the limiting factor.
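Rough arithmetic on the scale involved (every figure here is an order-of-magnitude commonplace, and the per-event cost is a pure guess on my part):

    # Back-of-envelope cost of simulating the brain as a network
    # of dynamic units; all figures are rough, commonly cited estimates
    neurons = 8.6e10             # ~86 billion neurons
    synapses_per_neuron = 1e4    # order of magnitude
    mean_rate_hz = 10            # rough average firing rate
    ops_per_event = 10           # assumed cost per synaptic update

    events_per_s = neurons * synapses_per_neuron * mean_rate_hz
    print(f"~{events_per_s * ops_per_event:.0e} ops/s")  # ~1e17

And that ignores dendritic nonlinearities, plasticity, and chemistry entirely.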


>Our current theoretical understanding of how human cognition arises from neural processes is probably close to a level sufficient to build human level AI.

This is not true at all.

The brain of C. elegans (roundworm) has been mapped exactly (the connectome is known) and we know its 302 neurons and 8000 synapses well (it has just 959 cells total), but we still can't fully understand how its primitive brain works. It doesn't even have spiking neurons and it's still a mystery. It would be relatively straightforward to simulate; there is even software for doing it.
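For a sense of how small the dynamical system is, here's a toy graded-potential (non-spiking) network at C. elegans scale; the weights are random, nothing like the real connectome, and the leaky-integrator form is just a common simplification:

    import numpy as np

    rng = np.random.default_rng(2)
    N = 302                # C. elegans neuron count
    tau, dt = 0.1, 0.01    # membrane time constant and Euler step (s)

    # Random sparse weights: ~9% of the 302x302 potential connections,
    # which comes out near the ~8000 synapses quoted above
    W = rng.normal(0, 0.1, (N, N)) * (rng.random((N, N)) < 0.09)

    def step(v, i_ext):
        # One Euler step of a leaky integrator with graded transmission
        dv = (-v + W @ np.tanh(v) + i_ext) / tau
        return v + dt * dv

    v = np.zeros(N)
    for _ in range(1000):
        v = step(v, rng.normal(0, 0.05, N))

Simulating the dynamics is the trivial part; knowing what the 302 units and their parameters should actually be, and what the resulting activity means, is the open problem.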

To fully understand how even a simple brain works, we must understand gene expression inside brain cells, the role of somatic brain mosaicism, brain chemistry, the neural connectome, how cortical columns work, neural coding, etc.

When you get all this right, you need to fine-tune it. Hyperparameter optimization for human-level AI, so that it's not epileptic, autistic, schizophrenic, manic, an idiot, or the AI equivalent of these and a million other things, must be a really hard process.

---

http://www.openworm.org/

http://bluebrain.epfl.ch/page-52741-en.html

http://cajalbbp.cesvima.upm.es/


Thanks for reminding us of OpenWorm. The fact that scientists are still working on simulating an animal with barely a thousand cells surely puts claims about simulating a mammalian brain, let alone a human one, into perspective.

Frankly, it's even kind of discouraging.


Meh, evolution is known to write utterly incomprehensible spaghetti code. It's really not surprising that reverse engineering it is so hard.

Write a very simple genetic algorithm to evolve artificial neural networks or computer code (a toy sketch of one is at the end of this comment). Often it can solve simple tasks in creative ways, but trying to understand the output is usually a nightmare. It usually comes up with ridiculously convoluted and insane ways of doing things. E.g.: https://www.damninteresting.com/on-the-origin-of-circuits/

>The plucky chip was utilizing only thirty-seven of its one hundred logic gates, and most of them were arranged in a curious collection of feedback loops. Five individual logic cells were functionally disconnected from the rest— with no pathways that would allow them to influence the output— yet when the researcher disabled any one of them the chip lost its ability to discriminate the tones. Furthermore, the final program did not work reliably when it was loaded onto other FPGAs of the same type.

Now imagine a system many times larger and more sophisticated than that, that evolution has been hacking on for hundreds of millions of years.

Doesn't mean we aren't close to human level AI though. We didn't completely reverse engineer birds before building airplanes, or horses before building cars.
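For reference, a "very simple genetic algorithm" really is this small. The fitness function here is a toy (count the 1-bits); evolving circuits or networks just swaps in a different one:

    import numpy as np

    rng = np.random.default_rng(3)

    def fitness(genome):
        return genome.sum()  # toy objective: maximize the number of 1-bits

    pop = rng.integers(0, 2, (50, 64))  # 50 genomes, 64 bits each
    for generation in range(100):
        scores = np.array([fitness(g) for g in pop])
        parents = pop[np.argsort(scores)[-25:]]         # keep the top half
        kids = parents[rng.integers(0, 25, 50)].copy()  # resample with repeats
        kids[rng.random(kids.shape) < 0.02] ^= 1        # 2% bit-flip mutation
        pop = kids

Even at this scale, the evolved solutions to less trivial fitness functions tend to be opaque; nothing in the process forces them to be human-readable.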


It's one of the reasons I totally do not believe in https://en.wikipedia.org/wiki/Human_Brain_Project

What I do believe is that once OpenWorm achieves its goals, progress might become very rapid indeed.


As a glimmer of hope, consider this: it may be easier to decode a much larger system with redundancies than a highly evolved and tightly optimized 302-neuron system.

C. elegans has a small number of heterogeneous neurons. Some are quite task-specific.

The human brain, on the other hand, has ~100 billion. While they do not all have the same structure, within specific regions you will see repeated patterns of homogeneity and redundant encodings.

It may be a harder problem to start with the small system than the large one.


I think it's more important to choose the right level of detail for what you're modelling. This is covered in "The Use and Abuse of Large Scale Brain Models" [1].

[1] https://github.com/arvoelke/delay2017/blob/master/delay2017....


> Our current theoretical understanding of how human cognition arises from neural processes is probably close to a level sufficient to build human level AI.

No. Not even remotely close. If that were true, we'd be able to at least somewhat emulate the behavior of simpler organisms that don't have the connectivity of a human brain. The fact of the matter is, we simply don't understand cognition.

I'd say the challenge in realizing strong AI right now is both a lack of understanding of the brain as well as emulating its mode of computation, which is different from a traditional digital computer.


> At this point, processing power rather than theoretical understanding is the limiting factor.

No. And you can show this clearly with the following reasoning: if what you say were true, then we would be able to simulate a human cognition system, just slower than real time.

The fact that we can't even begin to do that, no matter what speed we would like to set it in motion at, proves that there is no such thing as sufficient theoretical understanding.

To state that even more strongly: we have roughly the kind of knowledge that you'd be able to glean by taking a hacksaw to a computer: figuring out that there is such a thing as a non-linear element, that you could use it for computation, and maybe a basic digital circuit or two.

Immensely useful, but not the level of understanding that we would like to have.


>Our current theoretical understanding of how human cognition arises from neural processes is probably close to a level sufficient to build human level AI.

Please state that understanding.


Generally:

We have a good idea of how neurons interact at a low level: dendritic connections among themselves.

At a higher level, we know the brain regions and how they are connected to one another.

Glial cells are still a bit of a mystery but work is advancing rapidly on that front.


I see.

Humans are composed of cells.

Societies are composed of humans.

Now we know everything about how they work!

Umm ... :)


And the whole slew of chemicals flying around that affect pretty much every interaction in the entire system.


I second what EGreg said. You have the additional problem that "brain regions" aren't composed of cells, they're composed of circuits, and neurons themselves come in multiple kinds that have different roles in neural circuits. Then you get the problem that saying, "it's circuits built into regions built out of neurons" doesn't make any predictions about the system as a whole.

So, at the very least, this stratospherically high-level overview is incomplete.


I do research in the behavioral sciences. I've published a little on analysis of neural activity, although it's not my area of specialty, but I do specialize in analyses that are cognate to DL models and are often covered in textbooks on DL along with classical DL models.

My sense is that the level of understanding of how human cognition works, mathematically speaking, is roughly similar to our understanding of how DL works in computer science, in the sense that if you asked a cognitive neuroscientist or psychologist how someone classifies cats versus non-cats, you'd get an answer that would seem pretty similar to what you'd get from a computer scientist. The behavioral scientist might go into a lot more detail about certain issues, but that's because the biology is so intertwined.

However, I'd also argue that we really don't know much about how human (or any animal) cognition works, and I'd also argue that our understanding of DL is fairly poor, in that a lot of it is tinkering and seeing what happens, without a deep understanding of why it works. There isn't a theory of DL in the same way that there's a Martin-Löf theory of randomness, a Kolmogorov theory of algorithmic complexity, or a Fisherian model of inference.

Also, the sorts of tasks currently involved in AI research are a tiny subset of what you encounter in neuroscience and psychology. Most of what is a hot topic in comp sci would basically be classified as perceptual tasks in human behavioral science, maybe at a slightly higher level, and maybe motor control. That leaves things like conscious versus nonconscious processing, reasoning, the role of emotion in decision-making, uncertainty valuation, creativity, etc. etc. etc.

I agree that the processing power is an issue but it's only part of the puzzle.

One thing that illustrates the complexity of the issues involved, and how we've only begun to scratch the surface, is the article's assertion that comp sci should borrow the idea of sparse representations from cognitive neuroscience. I thought that was interesting, because in a lot of ways, one of the major trends of the last ten years in human neuroscience has been away from this "sparseness" idea. It was a common assumption maybe 15 years ago, but now people routinely get excoriated for invoking it. The current paradigm is one where a lot of pathways/circuits are being recruited simultaneously. Statements like "you might use 10,000 neurons of which 100 are active" would lead to ridicule. The intuitive way of explaining the problem is that even while your brain is trying to decide whether you're perceiving a cat versus something else, it's also processing the consequences of that decision along about 10 different dimensions, the implications for the rest of the stimuli coming in, along with a number of other things we just don't understand.


Each of those higher abstractions is processed because the neocortex is a filtered hierarchy that communicates up and down at each layer. The results of the lower levels trigger and feed into the level above. The higher levels project an expectation of the next result to the layer below them. While the lowest level might be concerned with identifying the edges of lines, the layer above it is identifying letters, and above that words, sentences, concepts, meaning, how that is similar to other things, etc. Each of these layers is active at a moment in time, but the communication interface between each layer is a sparse distributed representation.
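To make "sparse distributed representation" concrete, here's a toy sketch using the dimensions commonly quoted in HTM material (2048 bits, ~2% active); this is an illustration, not Numenta's actual code:

    import numpy as np

    rng = np.random.default_rng(4)
    n_bits, n_active = 2048, 40   # ~2% sparsity

    def random_sdr():
        sdr = np.zeros(n_bits, dtype=bool)
        sdr[rng.choice(n_bits, n_active, replace=False)] = True
        return sdr

    def overlap(a, b):
        # Shared active bits = semantic similarity between SDRs
        return int(np.count_nonzero(a & b))

    a, b = random_sdr(), random_sdr()
    print(overlap(a, a))  # 40: identical representations
    print(overlap(a, b))  # usually 0-2: random SDRs barely collide

Layers can communicate with such vectors because chance overlap between unrelated patterns is tiny, so even a partial match of a dozen bits is a strong signal.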


Numenta makes my kook sense tingle... but Jeff Hawkins did found the Redwood Center for Theoretical Neuroscience (now part of HWNI), and for that I am grateful.


Yeah, the ratio of bold statements to demonstrated effectiveness seems a little high. And after all their talk about embodied sensorimotor whatnots, HTM doesn't seem that good for, well, actually doing things, compared with mainstream deep learning (e.g. AI Gym stuff).

Also I don't get why their method cares about sampling rate. It just seems weird.

> A data sampling rate from once per minute to once per hour, with the “sweet spot” being between once per minute and once every five minutes (faster velocity data can be aggregated or sampled as well)


"The question of whether Machines Can Think... is about as relevant as the question of whether Submarines Can Swim." --Dijkstra (1984) The threats to computing science (EWD898).


Some professor asks students on the first day of neuroscience, if all there is to know about the brain is a mile long, how far have we gotten so far?

Students guess a few yards, a hundred feet, ten feet...the professor says no no, not even 3 inches.


> Our discovery is that every region of the neocortex learns 3D models of objects much like a CAD program.

Citation needed. If this was the case, certainly we'd have some experiments that show this? Especially since every part of the neocortex is supposed to be doing it.

Now if we can make artificial neural networks that work with 3D data, learning such things as 3D-data-to-value or 3D-data-to-3D-data mappings, that would be damn useful, i.e., estimating how much it would cost to make something from a CAD model, or how aerodynamic a thing is, without running costly CFD.

I'd also argue that we don't need 'truly intelligent machines' to "build structures, mine resources, and independently solve complex problems".

Ants and termites are capable of doing similar tasks and I'm doubtful the author considers them 'truly intelligent'.

> it should be possible to design intelligent machines that sense and act at the molecular scale.

> These machines would think about protein folding and gene expression in the same way you and I think about computers and staplers.

> They could think and act a million times as fast as a human.

If the author means in simulated environments, we are quite slow at simulating molecules. Molecular simulation needs something like femtosecond (10^-15 s) time steps, and computing each time step takes on the order of milliseconds, so we are trillions of times slower than realtime. That is for completely classical systems; if we take quantum effects into account, it's much slower still. Oh, and our simulation methods for such things are terrible. Intelligence would help here, but it's not going to be millions of times faster than a human.
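The "trillions of times slower" figure is just this arithmetic (the per-step compute time is my assumption):

    sim_timestep_s = 1e-15        # femtosecond MD integration step
    wallclock_per_step_s = 1e-3   # assumed ~1 ms of compute per step
    slowdown = wallclock_per_step_s / sim_timestep_s
    print(f"{slowdown:.0e}x slower than realtime")  # 1e12: a trillion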

Now if they mean videoing what's happening with a microscope and learning from that, well the problem is we don't have a perfect microscope for seeing such things at the nanoscale. So in order to figure out what's going on we have to get creative and make tests for each thing we're trying to analyze.

Now if the author means nanorobots inside cells doing learning and whatnot, just having such machines would be useful in and of itself. Heck, if we could make such things, we wouldn't need to worry about problems such as gene expression or protein folding, because we'd be able to make our own damn proteins, or our own damn cells for that matter. Even with Drexlerian tech, doing this sort of machine learning at this scale is pretty ridiculous. Current nanobot designs require something on the order of kilobytes of memory [1]. In addition, gene expression, protein synthesis, and folding are slow processes. The average protein synthesis time for eukaryotes is 2 minutes [0] (eons as far as simulating these things is concerned!). So getting this data can't happen much faster than a human can think.

[0] http://people.umass.edu/bioch623/623/Second.Section/7.%20CoT...

[1] http://www.rfreitas.com/Nano/Microbivores.htm




