For example, the article mentions the ability of clustered synapses to act independently, but on the one hand it has been shown that independent dendrites can be approximated as an extra neural-network layer (so they ARE covered by today's ANN approximations), and OTOH there are a number of papers showing that synaptic clustering does not exist in sensory areas. And learning by rewiring is basically the introduction of random connections which persist only if their weight increases enough (this roughly corresponds to the continuous formation of filopodia and to the fact that large spines persist longer).
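The rewiring rule described above (random candidate connections that survive only if their weights grow) can be sketched in a few lines. This is a toy illustration with made-up thresholds and growth rates, not a model of real filopodia dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 20, 10
# Start from sparse random wiring (analogous to transient filopodia).
mask = rng.random((n_post, n_pre)) < 0.2
w = np.where(mask, rng.normal(0.0, 0.1, (n_post, n_pre)), 0.0)

def rewire(w, mask, grow_p=0.05, prune_thresh=0.05):
    """Grow a few random candidate synapses; prune those that stayed weak."""
    grow = (~mask) & (rng.random(mask.shape) < grow_p)
    w = np.where(grow, rng.normal(0.0, 0.01, w.shape), w)
    # Synapses whose weight never grew past the threshold are removed;
    # strong ones (large "spines") persist. Fresh growth gets a grace period.
    survive = (mask & (np.abs(w) >= prune_thresh)) | grow
    return np.where(survive, w, 0.0), survive

for _ in range(50):
    # Stand-in for a learning rule: existing weights drift randomly.
    w += rng.normal(0.0, 0.005, w.shape) * (w != 0)
    w, mask = rewire(w, mask)

print(mask.sum(), "synapses survive out of", mask.size, "possible")
```

Most newly grown synapses are pruned after one step because their weight never grows; only the ones that happen to strengthen persist, which is the gist of the rule.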
Machine learning at the moment is an empirical science that has made great strides without consulting neuroscience. I think that has been a good thing: without having to bend toward some notion of biological plausibility, researchers have been more exploratory and creative, which has led to the creation of an empirical body of knowledge from which neuroscience could benefit in the future. OTOH, having watched the field of computational neuroscience, there has not been a lot of progress since basically the 80s. So I believe it would be best to let each of the two fields go its own way.
I wouldn't really say that. There's a lot of progress being made in using and understanding biological processes for useful tasks. The biological mechanisms are remarkably complex and rich, and they vary a lot across the brain (like you mentioned with the dendrites; dendrites also work as coincidence-detection mechanisms in some layers of the cortex, for instance).
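To make "coincidence detection" concrete: the computational idea is simply that a unit responds only when inputs on two branches arrive within a short time window. A minimal sketch (the 2 ms window and spike times are arbitrary examples, not measured values):

```python
def coincidence_detector(spike_times_a, spike_times_b, window=2.0):
    """Fire when a spike on branch A falls within `window` ms of one on branch B."""
    return sorted(
        max(ta, tb)
        for ta in spike_times_a
        for tb in spike_times_b
        if abs(ta - tb) <= window
    )

# Only the 1.0/2.5 ms pair coincides; the isolated spikes produce nothing.
print(coincidence_detector([1.0, 10.0, 25.0], [2.5, 30.0]))  # -> [2.5]
```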
 gives a very nice overview of what is needed from both machine learning and computational neuroscience to solve the entire problem of understanding the human brain.
There was the liquid state machine paper in 2002 which showed that random networks of spiking neurons can perform some computations and retain memory even in the absence of special learning rules.
There has also been quite some work on understanding e.g. the role of assemblies of neurons (groups of neurons firing), assembly sequences, plasticity rules, theory formulating neural activity in a probabilistic manner, a better understanding of the role of inhibition, dendrites etc.
And lastly, there have been huge advances in neuromorphic computing, both digital and analog. In some cases, the performance on these chips (that use spiking neurons) approaches that of state of the art machine learning. e.g. 
As for Liquid State Machines, those evolved into Reservoir Computing and Echo State Networks. If you want to read more about them, I would recommend this paper comparing them to the Neural Engineering Framework (with code!) to get a good idea of the state of the field.
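For anyone wondering what an echo state network actually involves, here is a minimal sketch: a fixed random recurrent reservoir whose weights are never trained, plus a linear readout fit by ridge regression, here on a simple delayed-recall task. The sizes, scaling factors, and task are arbitrary choices for illustration, not taken from the paper mentioned:

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, T, delay = 100, 1000, 3

# Fixed random reservoir, scaled to spectral radius < 1 (the "echo state" condition).
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(0, 0.5, n_res)

u = rng.uniform(-1, 1, T)          # random input signal
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):                 # the reservoir itself is never trained
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Only the linear readout is trained (ridge regression), here to
# reproduce the input delayed by `delay` steps.
target = np.roll(u, delay)
X, y = states[delay:], target[delay:]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
nrmse = np.sqrt(np.mean((X @ w_out - y) ** 2)) / np.std(y)
print("delayed-recall NRMSE:", nrmse)
```

The point mirrors the liquid state machine result: the random, untrained dynamics already hold enough memory that a plain linear readout can recover a past input.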
What is your basis for this statement?
Consider Geoff Hinton's publishing record, which contains many collaborations with notable psychologists / neuroscientists (e.g. Jay McClelland) who helped bring neural networks back into the spotlight.
The sense I get is that Machine Learning is mostly an empirical field at the moment, without a terribly solid theoretical underpinning. Which is not, of course, to suggest that ML researchers haven't consulted neuroscience at all. But there still seems to be a pretty big disconnect between neuroscience and ML. Unless I've just really missed something, which is entirely possible.
There seems to be some interesting interplay, though. For example, DeepMind recently sponsored this ANN conference where most of the speakers were neuroscientists.
What I said, exactly, is "We have relationships with some neuroscientists".
You can see the text (https://www.reddit.com/r/artificial/comments/6beeqj/5182017_...) and a video of me talking during the AMA about our relationships with neuroscientists (https://youtu.be/9fk8tg_jqh0?t=43m6s).
The fact is that we have many ongoing relationships with neuroscientists. But I don't know them personally so I didn't give any details. We also have a Guest Scientist program aimed at increasing these collaborations.
Numenta isn't biologically plausible, so although the resulting networks might be mappable to the brain in the same manner as Deep Learning and FORCE-trained networks, there's no way to map the learning period onto brains.
Deep learning proponents focus on solving specific problems using mathematical models inspired by biology, whereas HTM proponents argue that biology is important and should play a more central role. Deep learning folks are more geared towards applied AI, whereas HTMs are far more ambitious and are trying to solve the problem of intelligence/AGI.
Numenta, the company behind HTMs, has a platform called NuPIC for "intelligent computing": https://github.com/numenta/nupic
But as far as I know, HTMs have never been successfully applied to anything non-trivial.
There are plenty of things going on in a real neocortex that are not essential to intelligence and Numenta was working hard from the start to pick out the important ones at an appropriate level of modeling.
This distinction becomes especially important in contrast to things like Henry Markram's projects, where people seem to be happily spending major-multi-million-euro sums of public funds to run poorly understood simulations with insufficient detail on a really big computation cluster.
RNNs are crude compared to the HTM as well as to the more recent CLA model, but they do share, for instance, the formulation as a dynamical system with recurrence. Some of the publications coming from people at Numenta have made direct comparisons between the CLA and, for instance, LSTM-based networks, and in my opinion they are entirely reasonable to do so. They are all suited to sequence modelling.
Having said that, I agree the comparison to RNNs isn't to be made without understanding some of what goes into the HTM or CLA, and I agree that Hawkins emphasizes biological intelligence.
Also http://www.cortical.io is pretty damn cool.
I mean, so far we have no biological evidence of backpropagation, and it seems pretty useful.
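For concreteness, here is backpropagation in its barest form, a two-layer net learning XOR. The biologically awkward step is the `W2.T` in the backward pass: the error is routed backwards through the same weights, transposed (the "weight transport" problem), and nothing like that has been observed in real synapses. All hyperparameters here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

for _ in range(5000):
    # Forward pass: tanh hidden layer, sigmoid output.
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))
    # Backward pass: errors flow back through the SAME weights, transposed.
    d_out = out - y                       # cross-entropy gradient at the output
    d_h = (d_out @ W2.T) * (1 - h ** 2)   # tanh derivative
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(np.round(out.ravel(), 2))   # should end up close to [0, 1, 1, 0]
```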
'But we can't even figure out worms!' - Worms are made up of neurons but they perform a different function from what the neocortex does so studying them is not like studying a simpler problem, it's studying an entirely different one.
'But ML can do X better!' - Unlike industry or academia, Numenta's primary goal is to figure out how the neocortex works, it's not about profit or publications.
'Biology contains details we don't need!' - Numenta's approach is not biologically inspired, it's more like biologically constrained. They avoid implementations that are functionally different from how the neocortex works.
I would highly suggest reading On Intelligence to learn more.
Numenta is literally a corporation...
Huh? Take your pick of creatures that display highly intelligent behaviour and have less complex nervous systems than humans. Insect navigation would be a good example. If we can't figure out the biological basis of that -- when in many cases we already have a good idea of what calculations are being performed -- then our chance of "solving human level intelligence" is zero.
Jeff talks about this exact issue in a speech he gave 10 years ago. He mentions it around the 11-minute mark, but the whole video is a great intro to what this is all about.
That is pure speculation.
As far as I can tell, what you're suggesting is that it might possibly be easier to figure out how human intelligence works than it would be to figure out how an insect performs a few calculations, in cases where we know almost exactly which calculations are being performed (see e.g. http://science.sciencemag.org/content/312/5782/1965).
I don't see any reason to think that this is true other than wild optimism.
The Neuroscience book by Mark Bear gives a nice introductory overview of the biology behind a lot of neural processes:
If you're interested in computational neural models, I would highly recommend Wulfram Gerstner's recent book (available for free online): http://neuronaldynamics.epfl.ch
The book by Dayan and Abbott on theoretical neuroscience is quite nice too: https://mitpress.mit.edu/books/theoretical-neuroscience
After this, you'll probably want to read new review papers, since it's a field that's moving quite fast now. (Especially with new measurement techniques in the past few years)
Start with "Fundamentals of Neuroscience" by Harvard, free multimedia course that requires only a high school level education:
then go to
"Medical Neuroscience" on Coursera (the professor is a great teacher):
Especially the 2nd course is pretty big, one of the largest online courses there is in terms of videos to watch and things to learn. But while it is a lot, it is much easier than the quantum mechanics course(s) half its size on edX. You don't have to think much, just listen and learn.
After that you have a very solid foundation; from there, check out more such courses on the same platform.
How arrogant! The features they list aren't unique to humans, or even to mammals. They're all present in the nidopallium of birds as well. Machines won't become intelligent unless they incorporate certain features of intelligent life.
This is the fundamental challenge that we face. Our ability to build AI that can emulate human thought is not limited by a poor understanding of the brain. Our current theoretical understanding of how human cognition arises from neural processes is probably close to a level sufficient to build human level AI. What limits our progress is the staggering computational demand of simulating a massive network of highly dynamic units.
Shortcuts, simplifications and clever algorithms will only get us so far. At this point, processing power rather than theoretical understanding is the limiting factor.
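As a rough sanity check on the "staggering computational demand" claim, a back-of-envelope estimate (all three numbers are commonly cited order-of-magnitude figures, not precise measurements):

```python
# Order-of-magnitude estimate of the compute needed to simulate the brain
# at the level of synaptic events.
neurons = 1e11        # ~100 billion neurons
synapses_each = 1e4   # ~10,000 synapses per neuron
rate_hz = 10          # assumed average firing rate
events_per_s = neurons * synapses_each * rate_hz
print(f"{events_per_s:.0e} synaptic events per second")   # -> 1e+16
```

That is before accounting for any within-neuron detail (dendrites, gene expression, neuromodulation), each of which multiplies the cost further.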
This is not true at all.
The brain of C. elegans (a roundworm) has been mapped exactly (its connectome is known), and we know its 302 neurons and 8000 synapses well (it has just 959 cells in total), but we still can't fully understand how its primitive brain works. It doesn't even have spiking neurons, and it's still a mystery. It would be relatively straightforward to simulate; there is even software for doing it.
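To be clear about what "straightforward to simulate" means: the dynamics themselves are cheap. A toy graded (non-spiking) network of 302 units runs instantly. The catch is exactly the point being made: the weights below are random placeholders, and even with the real published connectome plugged in, the resulting behavior would not explain itself:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 302   # C. elegans neuron count
# Random weights as a PLACEHOLDER for the real connectome:
# roughly 8000 nonzero entries, matching the synapse count cited above.
W = rng.normal(0, 0.1, (N, N)) * (rng.random((N, N)) < 8000 / N**2)

def step(v, inp, dt=0.1, tau=1.0):
    # Graded (non-spiking) leaky dynamics, in the spirit of C. elegans neurons.
    return v + (dt / tau) * (-v + np.tanh(W @ v) + inp)

v = np.zeros(N)
inp = np.zeros(N)
inp[:6] = 1.0   # drive a handful of arbitrary "sensory" neurons
for _ in range(200):
    v = step(v, inp)
print("activity range:", float(v.min()), "to", float(v.max()))
```

Running this tells you nothing about what the circuit is for, which is the gap between simulation and understanding.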
To fully understand how even a simple brain works, we must understand gene expression inside brain cells, the role of somatic brain mosaicism, brain chemistry, the neural connectome, how cortical columns work, neural coding, etc.
Once you get all this right, you need to fine-tune it. Hyperparameter optimization for human-level AI, so that it isn't epileptic, autistic, schizophrenic, manic, an idiot, or the AI equivalent of these and a million other things, must be a really hard process.
Frankly, it's even kind of discouraging.
Write a very simple genetic algorithm to evolve artificial neural networks or computer code. Often it can solve simple tasks in creative ways, but trying to understand the output is usually a nightmare: it tends to come up with ridiculously convoluted and insane ways of doing things. E.g.: https://www.damninteresting.com/on-the-origin-of-circuits/
>The plucky chip was utilizing only thirty-seven of its one hundred logic gates, and most of them were arranged in a curious collection of feedback loops. Five individual logic cells were functionally disconnected from the rest— with no pathways that would allow them to influence the output— yet when the researcher disabled any one of them the chip lost its ability to discriminate the tones. Furthermore, the final program did not work reliably when it was loaded onto other FPGAs of the same type.
Now imagine a system many times larger and more sophisticated than that, that evolution has been hacking on for hundreds of millions of years.
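A genetic algorithm of the kind described really is only a few lines. This toy version evolves a string rather than a circuit (population size, elite count, and mutation rate are arbitrary), but the mechanism is the same: random variation plus selection, with no step-by-step design:

```python
import random

random.seed(4)
TARGET = "HELLO WORLD"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    """Number of characters matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Randomly replace each character with small probability."""
    return "".join(random.choice(CHARS) if random.random() < rate else c for c in s)

# Random initial population; keep the fittest, refill by mutating survivors.
pop = ["".join(random.choice(CHARS) for _ in TARGET) for _ in range(100)]
for gen in range(500):
    pop.sort(key=fitness, reverse=True)
    if pop[0] == TARGET:
        break
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(80)]

pop.sort(key=fitness, reverse=True)
print("generation", gen, "best:", pop[0])
```

With a string target the answer is legible; evolve an FPGA configuration or a network instead and the winning individual is exactly the kind of inscrutable tangle the article describes.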
Doesn't mean we aren't close to human level AI though. We didn't completely reverse engineer birds before building airplanes, or horses before building cars.
What I do believe is that once OpenWorm achieves its goals, progress might become very rapid indeed.
C. elegans has a small number of heterogeneous neurons. Some are quite task-specific.
The human brain, on the other hand, has ~100 billion. While they do not all have the same structure, within specific regions you will see repeated patterns of homogeneity and redundant encodings.
It may be a harder problem to start with the small system than the large one.
No. Not even remotely close. If that were true, we'd be able to at least somewhat emulate the behavior of simpler organisms that don't have the connectivity of a human brain. The fact of the matter is, we simply don't understand cognition.
I'd say the challenge in realizing strong AI right now is both a lack of understanding of the brain as well as emulating its mode of computation, which is different from a traditional digital computer.
No. And you can show this clearly by the following reasoning: if what you say were true, then we would be able to simulate a human cognitive system, just slower than real time.
The fact that we can't even begin to do that, no matter how slowly we would be willing to run it, proves that there is no such thing as sufficient theoretical understanding.
To state that even stronger: we are nowhere near close to such understanding. We have roughly the kind of knowledge that you'd be able to glean by taking a hacksaw to a computer, figuring out that there is such a thing as a non-linear element and that you could use this for computation and maybe a basic digital circuit or two.
Immensely useful, but not the level of understanding that we would like to have.
Please state that understanding.
We have a good idea of how neurons interact at a low level: how dendrites connect among themselves.
At a higher level, we know the brain regions and how they are connected to one another.
Glial cells are still a bit of a mystery but work is advancing rapidly on that front.
Humans are composed of cells.
Societies are composed of humans.
Now we know everything about how they work!
Umm ... :)
So, at the very least, this stratospherically high-level overview is incomplete.
My sense is that our understanding of how human cognition works, mathematically speaking, is roughly similar to our understanding of how DL works in computer science, in the sense that if you asked a cognitive neuroscientist or psychologist how someone classifies cats versus non-cats, you'd get an answer that would seem pretty similar to what you'd get from a computer scientist. The behavioral scientist might go into a lot more detail about certain issues, but that's because the biology is so intertwined.
However, I'd also argue that we really don't know much about how human (or any animal) cognition works, and I'd also argue that our understanding of DL is fairly poor, in that a lot of it is tinkering and seeing what happens, without a deep understanding of why it works. There isn't a theory of DL in the same way that there's a Martin-Löf theory of randomness, or a Kolmogorov theory of algorithmic complexity, or a Fisherian model of inference.
Also, the sorts of tasks currently involved in AI research are a tiny subset of what you encounter in neuroscience and psychology. Most of what is a hot topic in comp sci would basically be classified in human behavioral science as perceptual tasks, maybe at a slightly higher level, plus maybe motor control. That leaves things like conscious versus nonconscious processing, reasoning, the role of emotion in decision-making, uncertainty valuation, creativity, etc.
I agree that the processing power is an issue but it's only part of the puzzle.
One thing that illustrates the complexity of the issues involved, and how we've only begun to scratch the surface, is the article's assertion that comp sci should borrow the idea of sparse representations from cognitive neuroscience. I thought that was interesting, because in a lot of ways one of the major trends of the last ten years in human neuroscience has been away from this "sparseness" idea. It was a common assumption maybe 15 years ago, but now people routinely get excoriated for invoking it. The current paradigm is one where many pathways/circuits are recruited simultaneously. Statements like "you might use 10,000 neurons of which 100 are active" would invite ridicule. The intuitive way of explaining the problem is that even while your brain is trying to decide whether you're perceiving a cat or something else, it's also processing the consequences of that decision along about 10 different dimensions, plus the implications for the rest of the stimuli coming in, along with a number of other things we just don't understand.
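For reference, the "sparse representation" idea under discussion is easy to state in code: a k-winners-take-all step that keeps only the strongest activations. The numbers are the ones quoted above (100 of 10,000 active); the function itself is a generic sketch, not anyone's specific model:

```python
import numpy as np

rng = np.random.default_rng(5)

def sparsify(x, k):
    """k-winners-take-all: keep the k largest activations, zero out the rest."""
    out = np.zeros_like(x)
    winners = np.argsort(x)[-k:]
    out[winners] = x[winners]
    return out

x = rng.normal(0, 1, 10_000)   # dense activity over 10,000 units
s = sparsify(x, 100)           # "100 of 10,000 active"
print("fraction active:", (s != 0).sum() / s.size)   # -> 0.01
```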
Also I don't get why their method cares about sampling rate. It just seems weird.
> A data sampling rate from once per minute to once per hour, with the “sweet spot” being between once per minute and once every five minutes (faster velocity data can be aggregated or sampled as well)
Students guess a few yards, a hundred feet, ten feet...the professor says no no, not even 3 inches.
Now, if we could make artificial neural networks that work with 3D data, learning such things as 3D-data-to-value and 3D-data-to-3D-data mappings, that would be damn useful, e.g. estimating how much it would cost to manufacture something from a CAD model, or how aerodynamic a shape is, without running costly CFD.
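The usual building block for such networks is a 3D convolution over a voxelized version of the geometry. A naive sketch (the voxel grid here is random occupancy data standing in for a rasterized CAD part; real systems would stack many such layers):

```python
import numpy as np

rng = np.random.default_rng(6)

def conv3d(volume, kernel):
    """Naive valid-mode 3D convolution over a voxel grid."""
    d, h, w = kernel.shape
    D, H, W = volume.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i+d, j:j+h, k:k+w] * kernel)
    return out

# Random occupancy grid as a stand-in for a voxelized CAD model.
voxels = (rng.random((16, 16, 16)) > 0.5).astype(float)
features = conv3d(voxels, rng.normal(0, 1, (3, 3, 3)))
print(features.shape)   # -> (14, 14, 14)
```

Stacking layers like this and pooling down to a scalar is one plausible route to the 3D-data-to-value mapping described above.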
I'd also argue that we don't need 'truly intelligent machines' to "build structures, mine resources, and independently solve complex problems"
Ants and termites are capable of doing similar tasks and I'm doubtful the author considers them 'truly intelligent'.
>it should be possible to design intelligent machines that sense and act at the molecular scale.
>These machines would think about protein folding and gene expression in the same way you and I think about computers and staplers.
>They could think and act a million times as fast as a human.
So if the author means in simulated environments: we are quite slow at simulating molecules. Molecular simulation needs something like femtosecond (10^-15 s) time steps, and computing each step takes on the order of milliseconds, so we are trillions of times slower than realtime. That is for completely classical systems; if we take quantum effects into account, it takes much longer. Oh, and our simulation methods for such things are terrible. Intelligence would help here, but it's not going to be millions of times faster than a human.
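The "trillions of times slower" figure follows directly from the two time scales mentioned. The millisecond-per-step compute cost is an assumed round number for the arithmetic:

```python
# Slowdown of molecular dynamics relative to real time.
sim_time_per_step = 1e-15    # ~1 femtosecond of simulated time per step
wall_time_per_step = 1e-3    # ~1 millisecond of compute per step (assumed)
slowdown = wall_time_per_step / sim_time_per_step
print(f"about {slowdown:.0e}x slower than real time")
```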
Now if they mean videoing what's happening with a microscope and learning from that, well the problem is we don't have a perfect microscope for seeing such things at the nanoscale. So in order to figure out what's going on we have to get creative and make tests for each thing we're trying to analyze.
Now, if the author means nanorobots inside cells doing learning and whatnot: just having such machines would be useful in and of itself. Heck, if we could make such things, we wouldn't need to worry about problems such as gene expression or protein folding, because we'd be able to make our own damn proteins, or our own damn cells for that matter. Even with Drexlerian tech, doing this sort of machine learning at this scale is pretty ridiculous. Current nanobot designs require something on the order of kilobytes of memory. In addition, gene expression, protein synthesis, and folding are slow processes: the average protein synthesis time for eukaryotes is 2 minutes (eons as far as simulating these things is concerned!). So gathering this data can't happen much faster than a human can think.