What I miss in these notes, and what I think is the main question we should be asking ourselves, is the question of the black-box approach. That is, do we understand what intelligence amounts to? On the one hand, we want to be able to say yes, but on the other hand, after all these years we haven't been able to formalize the concept of intelligence in a satisfactory or even meaningful way. So the answer seems to be no. Moreover, do we even know what intelligence is?
Thus, should we instead be building black-box systems that behave seemingly intelligently without understanding why they behave intelligently? I.e. neural networks, deep learning? Is this a good investment? It seems so, at least in the short term. But what about the long term? I don't know the answer.
Having said that, I've been working for some time on non-monotonic reasoning and related subjects, which are typical symbolic-logic approaches to AI, and I believe that what I am doing, while intellectually very worthwhile, is a dead end in the context of AI. I'm sometimes surprised at how easily we get money to work on this stuff. So I agree with the point that lack of funding is not the problem.
Could you please elaborate on why you believe it is a dead end? I'm learning about symbolic logic and I would love to have some pointers on the limitations of the approach... From what I've read to date, it's just too expensive to build a proper rule set, but I haven't found a good discussion of the limitations themselves!
John von Neumann was busy dying. Anyway, he disapproved of the Newell-Simon work on chess, and probably of AI in general.
made me laugh.
When McCarthy had just died, I remember a video from a fellow who had worked with him somewhere, talking about how, until the day of his death, he had massive amounts of bandwidth piped into his house, and how the fellow wondered what he did with it all, sort of like a mad scientist.
I still wonder sometimes when I think about him. I sort of like to think that he was busting out work towards the 'obstacle' points in this paper 'til the very end.
The human brain has about 100 billion neurons. The human genome is 770MB (in .2bit format)... So we know the genome doesn't have anywhere close to enough information to describe all of the connections within the brain... This means that the brain must be an emergent phenomenon: there must be some gross structure, or general mechanisms, described in the genome, and the rest is left to learning...
Andrew Ng at Stanford built the largest neural network to date, with 11.2 billion parameters (which I'm going to take to mean 11.2 billion neurons, as I'm assuming the parameters are the neuron weights)... So we're still pretty far off human numbers of neurons... In addition, humans have to be bathed in sensory input for years before they begin to show intelligence...
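To make that concrete, here's a rough back-of-envelope calculation. The neuron and synapse counts are the commonly cited ballpark estimates, not exact figures, and the "bits per connection" accounting is deliberately crude:

    # Back-of-envelope: information in the genome vs. information needed
    # to spell out the brain's wiring explicitly. All figures are rough.
    import math

    genome_bytes = 770 * 1024**2        # 770 MB in .2bit format (2 bits/base)
    genome_bits = genome_bytes * 8      # ~6.5e9 bits of raw capacity

    neurons = 100e9                     # ~10^11 neurons (common estimate)
    synapses = 100e12                   # ~10^14 synapses (common estimate)

    # Just naming the target neuron of each synapse takes ~log2(neurons) bits.
    bits_per_connection = math.log2(neurons)              # ~37 bits
    bits_to_list_wiring = synapses * bits_per_connection  # ~3.7e15 bits

    print(f"genome capacity:  {genome_bits:.2e} bits")
    print(f"explicit wiring:  {bits_to_list_wiring:.2e} bits")
    print(f"shortfall:        {bits_to_list_wiring / genome_bits:.0f}x")

The genome comes up short by five to six orders of magnitude, which is the point: it can only plausibly encode gross structure and learning rules, not the individual connections.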
Did anyone ever suppose the genome could possibly describe the functioning of the brain? After all, learning to play an instrument doesn't change your genome, and who could even take a stab at estimating how many bits of storage it takes to "store" mastery?
I took it to mean that there isn't enough storage in the genome to encode the initial full linkage of neurons, not that learning would have to be written back to the genome in order to change the ongoing network.
We will have an article with the same title, just with the years changed to 2045 and 2015 respectively. And probably one or two more 30-year periods after that too. People underestimate the complexity of intelligence because of how common-sense it seems to us.
It shouldn't actually be hard, especially when you look at how our brains lead to a recognizably human intelligence. Problem is, as has been pointed out by Jeff Hawkins, no one in AI research care(d/s) to look at biology to see what might be learned.... So much heat and funding without light.
Edit: this was meant somewhat ironically to get the point across that we can't be expected to succeed with AI unless we know how HI actually works.
The problem with that is, we still don't really know how the brain works either. Yes, we know roughly how neurons work, but how does that lead to the making of a decision? No clue. The best we can do is look at various activation patterns in the anterior cingulate cortex and wonder. It's like trying to figure out how a CPU works by measuring fluctuations in its temperature and power draw as it performs calculations.
Well, yes, which is the point. If we don't know how our brains do this intelligencing, then how can we expect any program or machine to do the same? This is a part of the failure of AI research. Can't put the cart before the horse. Well, you can, but it probably won't work.
Well, it is possible that human-like intelligence is a lot harder to build, and that you are unnecessarily complicating your journey by asking for human-like intelligence right off the bat as a matter of definition.
Like, I would think an AI that can perform capricious causal modelling from sensory or experimental data is already really sexy, even if it couldn't match up to human intelligence, or if it wasn't built in the same way as a brain.
Or, an AI that can perform capricious maps or analogies between situations.
I'm not suggesting an HI emulation, actually. If you're interested, you can get the gist of what I'm suggesting by reading Hawkins' "On Intelligence". The basic building blocks of intelligence in the brain will be the principles on which legitimate, workable AI research programs will have their first start. From there I'd anticipate radicalization and innovation based on a profound understanding of those principles, which will have been shown to work. But we have to do that by understanding what intelligence actually is - and the best available model we have for that is human intelligence, and thus the human brain.
And the closest we seem to be to solving this at present is fuelled by deep learning, which is basically a big neural network with an absurdly vast number of neurons (i.e., parameters for learning, like the brain). We can observe how this brute-force technique works, but unfortunately no one can explain why (it's a black-box model). The same story has been going on for decades.
I would relate it to being a discriminative model, which is tailored to solving a specific task, in contrast to generative models, which try to model and explain the world. Perhaps the brain is not meant to understand how the world works but how to take advantage of it.
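For what it's worth, here's a minimal sketch of that distinction. The choice of logistic regression and Gaussian naive Bayes as stand-ins for "discriminative" and "generative" is my own illustration, using scikit-learn purely for convenience:

    # Discriminative vs. generative, in miniature.
    # A discriminative model learns p(y|x) directly: good at its task,
    # silent about how the data itself is distributed.
    # A generative model learns p(x, y) = p(x|y) p(y): it "models the world"
    # and gets classification as a by-product via Bayes' rule.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression  # discriminative
    from sklearn.naive_bayes import GaussianNB            # generative

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)

    disc = LogisticRegression().fit(X, y)  # learns a decision boundary only
    gen = GaussianNB().fit(X, y)           # learns per-class feature distributions

    # Both can classify...
    print(disc.score(X, y), gen.score(X, y))

    # ...but only the generative model can be asked what the data looks like,
    # e.g. the learned per-class mean of each feature:
    print(gen.theta_)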
Deep learning models definitely don't need an "absurdly vast amount of neurons" - for example, GoogLeNet (arguably the state of the art image classification model) has only ~6M parameters.
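For a sense of where those savings come from, it helps to count parameters per layer. The layer sizes below are illustrative, not taken from the actual GoogLeNet architecture; the point is just that 1x1 "bottleneck" convolutions and dropping huge fully-connected layers are what keep the count small:

    # Parameter counting for convolutional vs. fully-connected layers.
    def conv_params(in_ch, out_ch, k):
        # each output channel has an in_ch * k * k kernel plus one bias
        return out_ch * (in_ch * k * k + 1)

    def fc_params(n_in, n_out):
        return n_out * (n_in + 1)

    # A plain 3x3 conv from 256 to 256 channels:
    print(conv_params(256, 256, 3))                            # ~0.59M

    # Inception-style trick: squeeze to 64 channels with a 1x1 conv first:
    print(conv_params(256, 64, 1) + conv_params(64, 256, 3))   # ~0.16M

    # Compare a single fully-connected layer on a 7x7x1024 feature map:
    print(fc_params(7 * 7 * 1024, 4096))                       # ~205M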
There's a whole branch of CompSci/AI exactly to do with this, known (sometimes) as "Bio-Inspired Systems". It is as false to say that nobody is looking to nature for inspiration as it is to say that human-level intelligence is a trivial problem.
If you can substantiate your assertion then there are people waiting to give you enormous piles of money!
> Edit: this was meant somewhat ironically to get the point across that we can't be expected to succeed with AI unless we know how HI actually works
That's only true if the way HI works is the only reasonable way to achieve AI.
Biological solutions can be a good inspiration for some problems, but not always. See the film Gizmo, which is the subject of another story currently on the first couple of pages of HN, for some footage of what happened when people tried to base aviation too closely on what birds do.
I can't see any reason that it is not plausible that someday, after we do have AI, the sentence "we can't be expected to succeed with AI unless we know how HI actually works" will be regarded similarly to the way the sentence "we can't be expected to build vehicles that travel 60 mph [1] unless we know how cheetahs actually work" would be regarded now.
seriously? no one in AI is looking at the human brain to try and understand intelligence? this is the most inaccurate comment I've seen in 2015, although it's a little too early in the year to call it a winner.
Actually, the comment is fairly accurate. Very few people in AI are reading neuroscience papers, and even fewer are trying to implement the ideas from those papers in software. Those who do are not really doing AI, but rather writing brain simulators, and not trying to perform any intelligent tasks. The closest examples that come to mind (aside from Jeff Hawkins) are Chris Eliasmith and the guys behind Leabra.
My PhD program was in neuroinformatics, and it was exactly this intersection; it was not an accurate comment. Hinton knows neurophysiology; it's clear in any of his talks that he reads about the weird illusions that give us insight into the computational mechanisms going on inside (e.g., how we recognize rotated objects but with symmetry unresolved).
Computational mechanisms that Hinton uses in his ML models have very little to do with what is going on in brains. It's kind of like watching a bird fly and building a jet or a helicopter. Hawkins' comment was about building a machine that "flies like a bird" - actually looking at what's going on in real brains. Hinton seems to be inspired not by brain physiology or anatomy, but by psychology.
What about you? Have you implemented in software any of the ideas you learned from looking at the brain? I mean, the software designed to perform some intelligent task?
This also reminds me of the attempt by a researcher to develop an AI to solve Bongard problems. He apparently gave up for supposed ethical reasons: http://www.foundalis.com/soc/why_no_more_Bongard.html . I suspect other reasons (i.e., he couldn't actually create a program that would do so), but it isn't altogether important what they are. I'd like to see a real AI solve such problems - and it's going to take understanding HI to do that.
Using your analogy I would guess Geoff Hinton's goal is to understand aerodynamics and which parts of bird flight are actually necessary for flight and which are simply accidents of evolution.
Hinton does not really try to understand how the brain does it. His latest ideas on capsules are a little bit closer to brain anatomy, but still very far away from real brains. I think he admits that himself.
Jeff Hawkins is the guy who actually tries to understand "which parts of bird flight are actually necessary for flight".
Hinton explicitly states he was intrigued that we can recognise an upside-down R but not tell whether it's reflected, and he made a neural model that leveraged the same weakness.
That indicates he reads psychophysics papers, and he uses the same loosening of problem constraints in his visual recognition models. That is drawing on cross-discipline insights.
I don't care that he doesn't model ion channels or fine-grained neuroanatomy; he is inspired by the coarse-grained computational tricks employed in biology. For a primarily AI researcher, that's an ideal level of abstraction.
He is looking at human behavior, sure. He is trying to copy some of the specifics of that behavior in his systems (like "pose" represented by columns of neurons). However, it still seems that he treats the human brain as a black box, and he's not trying to understand the mechanisms of computation and information processing in real brains - and I'm not talking about low-level stuff like ion channels, I'm talking about things like prediction, anomaly detection, or knowledge representation.
If you believe that the best way to build AI is to model the human brain, then we need to look inside the brain, and Hinton does not do that.
If you are interested in the intersection of neuroscience and AI, Jeff Hawkins' HTM theory is the best we've got so far. Unfortunately, most people talking about it can't be bothered to actually learn it. Just read the HTM white paper [1], and decide for yourself.
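If it helps, the core data structure behind HTM - the sparse distributed representation - is simple enough to sketch in a few lines. This is my own toy illustration, not code from Numenta; the 2048-bit, ~2%-active sizing is just the ballpark the white paper tends to use:

    # Toy sparse distributed representation (SDR): concepts are large binary
    # vectors with only a few active bits; similarity is measured by overlap.
    import random

    N, ACTIVE = 2048, 40   # ~2% of bits active

    def random_sdr():
        return set(random.sample(range(N), ACTIVE))

    def overlap(a, b):
        return len(a & b)

    cat, dog = random_sdr(), random_sdr()
    print(overlap(cat, dog))        # unrelated concepts barely overlap (~0-2 bits)

    # A noisy copy of "cat": keep 30 of its bits, add 10 random ones.
    noisy_cat = set(random.sample(sorted(cat), 30)) | set(random.sample(range(N), 10))
    print(overlap(cat, noisy_cat))  # still ~30 bits: recognisable despite the noise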
I've read it. Just not a good enough programmer to implement it. Maybe in time that will change. It's really exciting; I just wish people with the requisite skills would try to understand it more fully.
I've been reading about sparse distributed memory and Numenta/NuPIC for the past few days. It sounds very interesting, but I haven't actually seen any demos that show it solving things better than other avenues in AI (like deep learning) at the moment. Are you using it for anything?
Numenta's primary goal is not solving practical problems (such as those typical in ML field) - it's understanding how neocortex processes information. They believe that once they understand enough, they will be able to build a system that works like a brain (and then it will be able to solve any problems that a brain can solve).
Obviously, this is work in progress. They might have a breakthrough in two years, or in two decades, but the point is that not many others are trying to do that at all (definitely no one in ML field).
Those are interesting projects, and sooner or later we will need specialized hardware to speed up AI algorithms.
However, today AI development is not constrained by computing power, at least not directly. If you run IBM Watson algorithms 100 times faster, it won't be much smarter. In ML, we need better algorithms. In AI, we need better theories.
Brain simulations, such as the Human Brain Project, are where we need more computing power. That is the field where running 100 times faster can lead to major breakthroughs.
As I'm sure you're aware, Dileep George left Numenta to start Vicarious. He seems to be taking it in a slightly different direction with a focus on Bayesian-style probabilistic inference - something which may simplify the HTM framework for mainstream use. This looks promising and has considerably more funding, but there is apparently little focus on how our intelligence supervenes on our brains at Vicarious....
Pardon, I think I just have a broader definition than you do. I consider brain simulators to fall firmly under the rubric of AI. It's just that they're trying to take a single step instead of lunging straight for the goal.
Computer vision for one is investing effort into understanding how the human brain deciphers and classifies shapes. But my understanding is at the "Popular Science" level, not at the "Journal Of Neuroscience" level.
The problem with brain simulators is that the people who write them are typically neuroscientists, who are not interested in building AI. They just want their simulation to produce neuron behaviors similar to what their sensors record. Their simulations are at such a low level that even if the results produced by the simulation are identical to the real signals, it's just as hard to understand what's going on at a higher level as it is when looking at the real signals.
Also, those low-level simulations are so computationally intensive that you can forget about simulating any interesting high-level behavior in a reasonable time frame. Moreover, many of the simulated low-level details might not be relevant or necessary for intelligence.
People in AI would benefit from a good theory of how the brain works; unfortunately, initiatives like the Blue Brain Project have not produced such a theory (at least not yet).
I meant that as a part of the general argument put forward by Jeff Hawkins (cf. "On Intelligence"). Not my claim, to be precise. And it isn't exactly as ridiculous as it sounds either.
I completely agree that there is very little chance AI research will succeed any time soon, but not because AI researchers haven't listened enough to Jeff Hawkins. The way to understand human intelligence is to build computational models that reproduce some of its properties.
My statement isn't so much that we should listen to Hawkins, but rather that the point put forward by Hawkins (among others) in recent years is the point around which we should focus our efforts. You can't build a computational model if you have nothing that realistically reflects the requirements of the thing you wish to model, as it is instantiated in reality as we know it.