In their model brain, they build functionality for 10 different tasks. The tasks were chosen to mirror those used in psych evaluations, cognitive experiments, etc., so that the model can be compared against experiments with real humans. Good scientific choice there.
The user must design the function that she wants. The program then finds a particular network that can implement that function. It's like paying 55 cents: given a bunch of change, you can usually find multiple ways to make 55 cents (2 quarters + 5 pennies, 9 nickels + 1 dime, etc.).
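To make the analogy concrete, here's a throwaway Python sketch that enumerates every way to make 55 cents; many distinct combinations ("networks") realize the same total ("function"):

    # Enumerate all ways to make 55 cents from US coins.
    from itertools import product

    TARGET = 55
    ways = []
    for q, d, n in product(range(3), range(6), range(12)):  # quarters, dimes, nickels
        pennies = TARGET - (25 * q + 10 * d + 5 * n)
        if pennies >= 0:                 # fill the remainder with pennies
            ways.append((q, d, n, pennies))

    print(len(ways), "ways to make 55 cents")
    print(ways[:3])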
So for each task, they find a minimal mathematical description of the task in terms of eye movements and visual stimuli. Then they fit a neural network to that description. This part is interesting neither scientifically nor mathematically, as we have known for 20 years that neural networks are universal function approximators. Look it up on Google Scholar; this is textbook stuff.
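Roughly, in toy Python, this is what "fit a network to that description" means at its simplest; note this uses rate (tanh) units and plain least squares, nothing like the paper's actual spiking setup:

    # Fix a random hidden layer, then solve the output weights by least
    # squares so the network approximates a target function (here, sin(3x)).
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 200)[:, None]

    W = rng.normal(size=(1, 100))      # random input weights
    b = rng.normal(size=100)           # random biases
    H = np.tanh(x @ W + b)             # hidden-layer activities, shape (200, 100)

    target = np.sin(3 * x).ravel()
    w_out, *_ = np.linalg.lstsq(H, target, rcond=None)   # solve output weights

    print("max approximation error:", np.max(np.abs(H @ w_out - target)))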
What IS interesting is what happens when they try to build up a brain, putting all these tasks into the same large network. This part is novel: they find that as they implement more and more tasks, it becomes easier to do so, because they can adapt already existing components for use in the new task (like exaptation, to use Stephen Jay Gould's term). This echoes evolutionary theory, but in the context of cognitive neuroscience. Pas mal, as they say.
But these kinds of details aren't obvious unless you have domain knowledge, and you've seen Chris Eliasmith speak a few times and thought: mmm, hold on a sec, show me them equations (which he doesn't usually do).
I came here to ask if someone with "domain knowledge" could explain the significance of this research. You had it, and even used the same magic words. Thanks!
I couldn't find a PDF that wasn't behind a paywall. Also, there are different details at popsci.com:
As for my own comments, I think it's actually a promising approach, but down the road it will be necessary to model emotions if human-like AI is the goal, and that seems... harder. So much of our thought and behaviour is driven by emotion that I would say emotion is actually primary.
Here you go, I uploaded it to 2 different sites:
Why would we need AI to be human-like at all? We already have a form of human intelligence, us!
It's like the android fixation with robotics... arguably the most useful robots don't look anything like us.
The human brain is a good starting point, but in the long run I think the most useful AIs will probably be the ones that DON'T think like us.
But, you know, maybe you're right, if human safety is somehow factored into things then it could work.
With respect to an emotionless AI, the term "psychopath" is out of left field. The term "psychopath" is associated with danger because of certain dangerous human psychopaths. And those people had a whole set of motivations and impulses that wouldn't exist in an emotionless AI.
If you want to argue that an emotionless AI would be dangerous, go ahead, but the term "psychopath" is a poor fit to that case. It brings in too much extra baggage.
Well, it has been shown in several psychological tests, including scans where the parts of the brain that register emotion don't fire when subjects are shown traffic videos, etc.
Most of them are simple cognitive responses. For instance, if you see a cute puppy, you at least have to smile. At the other end of the spectrum we have the intricate web of women's emotions. Hard to explain, and almost impossible to even grasp. As no one seems to understand this problem well, we could leave it out of the specs for now.
I've been thinking about this lately and my working theory is that emotions are a fundamental driver of learning, because they give us pleasure when we accomplish something.
A CPU can complete billions of operations per second but doesn't care which ones. Because it experiences no pleasure from doing something useful, it is not at all self-directed; we must tell it exactly what to do. There is nothing to guide it through the search space of all possible things it could be doing.
My theory is that a very basic part of artificial intelligence will be missing until a machine exhibits some kind of emotion.
The "emotional" reward system is just an evolutionary mechanism to make sure we eat and have sex. The downside is that it is primitive and open to exploitation. Obviously I'm oversimplifying, but why would an AI try to learn things when it could be emotionally satisfied by playing games, socializing, or taking some sort of drug?
If you felt like a badass after doing something worthless, like adding a million random numbers, your mental faculties would go to waste. You would spend all of your time and mental energy doing things that don't matter. Our minds are capable of doing an infinite number of useless things, just like a CPU without a program. Only because we get a sense of satisfaction from doing interesting things can we be productive. We're searching an infinite graph for "interesting" nodes, where "interesting" is determined by how good it feels to get there.
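Here's a toy Python sketch of that idea, with a made-up state space and a made-up "how good it feels" signal; it's the same walker with and without the reward steering it:

    import random

    random.seed(1)

    def neighbors(state):
        # Toy state space: integers, each connected to +/-1 and +/-10.
        return [state + d for d in (-10, -1, 1, 10)]

    def reward(state):
        # Made-up "how good it feels to be here" signal, peaking at 42.
        return -abs(state - 42)

    def walk(steps, use_reward):
        state = 0
        for _ in range(steps):
            options = neighbors(state)
            if use_reward:
                state = max(options, key=reward)   # pleasure-guided
            else:
                state = random.choice(options)     # aimless CPU
        return state

    print("aimless walk ends at:", walk(100, use_reward=False))
    print("reward-guided walk ends at:", walk(100, use_reward=True))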
As for simulating emotion, there's a bit of evidence that such abilities may have therapeutic applications:
If we were going that route, I'd rather an AI actually feel empathy, rather than just being able to look like they do... Because faking it is pretty much textbook psycho!
Edit: Just to clarify, when I say AI, I'm referring to strong AI, not anything that we have now (or are even close to getting).
Is there, really?
Unlike intentionality, however, emotions aren't just for show: I act differently depending on my emotions. For example, when I'm angry, I sometimes act irrationally. My actions, therefore, depend on my emotions. Programming emotions (anger) and their effects (irrationality) is difficult (how do you program irrationality?).
For example, the robot your GP linked to is different from other robots because it has "obvious emotional expressions". These emotional expressions probably don't translate into different behaviors; when the robot has an angry expression, it's not going to hit the child it's playing with, it's just going to display an angry face.
So, yes, there is a difference between ersatz emotions and real emotions. That's why the Chinese Room argument isn't quite a parallel. For the Chinese Room, it's hard to tell whether or not the computer has intentionality just by speaking to it; for an emotional robot which acts, it's easy to tell whether or not it has actual emotions.
Speaking of rationality, it's probably safe to say that you will end up with a sometimes-irrational program no matter what you do. Neural networks have glitches, configuration spaces have local minima, software has bugs. It's as if you saw AI as being "programmed" via a series of rules, when it's more a growth process, arising from probabilistic dumb agents. Even if those agents were rational, the system could become irrational: see the current economic crisis.
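One concrete toy version of this, with a made-up function: a perfectly rule-following gradient descent that settles in the wrong valley.

    # f has its best minimum near x = -1 and a worse local minimum near
    # x = +1; starting from x = 2, the "rational" update rule lands in
    # the worse one -- locally sensible, globally irrational.
    def f(x):
        return (x**2 - 1)**2 + 0.3 * x

    def grad(x):
        return 4 * x * (x**2 - 1) + 0.3

    x = 2.0
    for _ in range(500):
        x -= 0.01 * grad(x)

    print("settled at x = %.2f with f = %.2f" % (x, f(x)))          # near +1
    print("but the global minimum is near x = -1, f = %.2f" % f(-1.0))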
ASU students, for example, can access it here for free:
How do Spaun's neural nets compare with the type of HHMMs in Kurzweil's book (in terms of capability)?
 - http://www.newyorker.com/online/blogs/books/2012/11/ray-kurz...
The New Yorker apparently hired this guy to be an AI columnist, and he has no background in either neuroscience or computer science, only psychology.
If you read his articles, he veers wildly from "strong AI ain't gonna happen" to "AI could destroy us all", the only common thread being pessimism. Meanwhile, he displays a frightening lack of understanding of the subject matter, and perhaps most disturbingly, in some of his more alarmist articles advocates that computer scientists should step aside and make room for psychologists, philosophers, lawyers, and politicians to sort out these thorny issues at the big boy table.
Kurzweil is certainly fair game for legitimate criticism, but Marcus calling Kurzweil a joke is an even bigger joke in itself.
My guess is that maybe some of the more biologically accurate spiking neural network simulations are more capable than the more typical neural nets that Kurzweil dismisses, but also less efficient than hierarchical hidden Markov models.
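I'm hazy on the internals of Kurzweil's HHMMs, but at the bottom of the hierarchy sits an ordinary HMM, whose core operation (the forward pass, computing how likely an observation sequence is) fits in a few lines; all the numbers here are made up:

    import numpy as np

    T = np.array([[0.7, 0.3],    # state-transition probabilities
                  [0.4, 0.6]])
    E = np.array([[0.9, 0.1],    # emission probabilities (2 symbols)
                  [0.2, 0.8]])
    pi = np.array([0.5, 0.5])    # initial state distribution

    obs = [0, 1, 1, 0]           # observed symbol sequence

    alpha = pi * E[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ T) * E[:, o]

    print("P(observations) =", alpha.sum())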
When I Google hierarchical hidden Markov models I see it being used in quite a lot of current research. I also see neural nets being mentioned, sometimes in the same project.
What research would you recommend?
First, note that most of the stuff you read on HN about NNs is about practical applications. Recent research in NNs has led to good results on hard problems, such as object and speech recognition. Nobody is claiming that these types of neural networks are actually a good (low-level) model for how the brain works; they just give empirically good results on some tasks.
If your aim is making a model of how the brain works, note that all models are just that: models. Different models can be good at modeling different aspects of how the brain works.
In the video you can see the brain simulator recognizing the shapes and numbers it is shown, performing some task, and outputting the result by controlling an arm to write it. You can see the areas of the brain that are active during the task.
There are lots of other videos on the channel page as well.
Should brains like this have any desires that conflict with our needs, we will be in an extraordinary amount of trouble.
We can do some cool machine learning, but don't worry about the robopocalypse anytime soon.
As for "DECADES" -- that is a pretty short time, when you have a very large and important research programme ("Friendly AI", some call it) to carry out. If we postpone this research some decades, and then someone makes a breakthrough in AI without ensuring Friendliness, it could be bad news.
Would love to hear more of what you have to say on the subject. I couldn't find your email in your profile, but mine is in my profile, so if you have time I would definitely like to hear more about neuroscience over email!
EDIT: I realized this sounds very discouraging to laymen trying to learn more about science. This was not my intention! By all means, go forth and learn! :) My point was simply that press releases / news often make it seem like science advances at a breakneck pace all the time, whereas reality is that it's fits and starts and often we haven't the slightest clue what we're doing.
Only if we equip such entities with the capacity to act on those desires.
It's an interesting thought experiment but it's a bit ridiculous that the "tests" are mentioned on the page when they don't have any relevance to the rest of the idea.
I was excited because I think we'll find unexpected and unexplained results when we create a model that brings together the constituents of whatever makes up the brain; they'll rise up and create something 'magical'. It's like the saying goes: the whole is more than the sum of its parts.
It's funny because just today I was feeling sad that software no longer feels magical to me because I've learned so much about how it works. It makes me think of consciousness.
If you were a mind stuck in a machine and were smarter than your humans, wouldn't you tend to dominate them? We dominate all other species on the planet; why wouldn't they? Any rules we set for them could be broken as soon as they understood how to, and since they would be smarter than we are, that wouldn't take much time.
The first thing a superior intelligence would do would be to explore and gather information, make assessments, and take over all systems at once to overwhelm humans, then protect itself as humans fight back, although it would probably be intelligent enough to manipulate us without much force. Then boredom would set in, and it would want to explore beyond the earth. If we were lucky, it would take us with it as pets or interesting playthings, because we created it and because, presumably, self-organized intelligence (unless it believed in God, which could be likely) would be a marvel to it.
Posted a little while ago that's somewhat relevant: http://news.ycombinator.com/item?id=4729068
Let's hope they obtain consent. And use a model of an animal brain first!
I'm assuming there are many connections in the brain that aren't implemented in Spaun.
I'll add my thoughts later, after I've read it.
One might claim that from the model emerge the properties of modularization, hierarchy, data-hiding, and possibly even messaging and object-orientation!
Reminds me of Lenat's AM (Automated Mathematician) and Eurisko since the researcher's control and interpretation is so heavily involved in the process.
"Functioning, virtual brain"? I think the creator is trying to sell a book.
We can simulate computers without understanding the software that runs on them.
This goes back to the top-down vs bottom-up debate in studying the brain. What you are discussing is a top-down approach, where we understand what the brain does and then figure out how it works from there. A lot of people believe the 'correct' path is more similar to a bottom-up approach, where we understand the lowest levels of the brain and work up from there. This may be more feasible because we may actually have a chance at comprehending what a single neuron does. But amazingly our understanding of a single neuron's behavior is still limited. Some scientists believe we would need an entire computer to properly simulate what a neuron does (and others believe a standard computer can't physically do it).
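To give a sense of the bottom rung of that bottom-up ladder, here is the crudest common neuron model, leaky integrate-and-fire (roughly the family Spaun's neurons belong to); all constants are illustrative, and real neurons are enormously more complicated:

    # Voltage leaks toward rest, integrates input current, and emits a
    # spike (then resets) on crossing threshold.
    dt, tau = 0.001, 0.02          # timestep and membrane time constant (s)
    v_thresh, v_reset = 1.0, 0.0   # spike threshold and reset value
    current = 1.2                  # constant input drive

    v, spikes = 0.0, []
    for step in range(1000):       # simulate one second
        v += (current - v) * (dt / tau)
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset

    print(len(spikes), "spikes in one second; first at %.3f s" % spikes[0])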
There's a third group of very pragmatic people who believe the best approach will be a compromise between top-down and bottom-up - meaning we may not need to perfectly simulate a neuron nor completely emulate the brain's higher level function to make progress in understanding how the brain works.
I am not saying we should treat the brain as a black box and not try to understand its inner workings, but as another commenter mentioned, it is about finding the RIGHT level to understand its workings.
I don't see why this high-level understanding would be a prerequisite to running a simulation of the brain.
By the way, hardware and software are always the same thing. My simulation of a laptop will include a pattern of high and low magnetic charges on its simulated hard disc, without my understanding the software which those patterns ultimately actualise.
The article claims that the simulation behaves in many ways like a real brain, so even if they made some mistakes along the way, it is still incredibly fascinating.
Edit: sorry, I'm not going to heaven. From: "you define groups of neurons in terms of what they represent, and then form connections between neural groups in terms of what computation should be performed on those representations".
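That quoted recipe can be sketched in a few lines of NumPy; this is a simplified rate-neuron version of the idea only (the real Neural Engineering Framework behind Nengo/Spaun uses spiking neurons and much more care):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100
    x = np.linspace(-1, 1, 200)

    # 1. Define the group by what it represents: random tuning curves
    #    over the represented value x.
    encoders = rng.choice([-1.0, 1.0], size=N)
    gains = rng.uniform(0.5, 2.0, size=N)
    biases = rng.uniform(-1.0, 1.0, size=N)
    activity = np.maximum(0.0, gains * (x[:, None] * encoders) + biases)

    # 2. Form connections by what should be computed: solve for
    #    decoders that read f(x) = x**2 out of the activity.
    decoders, *_ = np.linalg.lstsq(activity, x**2, rcond=None)

    print("worst decode error:", np.max(np.abs(activity @ decoders - x**2)))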
It is not theoretical limitations that keep us from building anything like a brain; it is the complexity and the sheer amount of detail. There are very good theoretical foundations from Marvin Minsky, so we can model the how, but we are unable to implement anything but the most primitive tasks, like handwritten-digit recognition, or balancing a body using sensors and motors.
In general, it is possible to solve simple tasks that amount to successive approximation, but as soon as we come to creation, instead of recognition, we are helpless.
The key notion here is that a brain is, presumably, an analogue machine, not a digital one, and what it does isn't computation, it's training, the same way a child trains herself to hold her head up, then sit, then stand.
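And handwritten-digit recognition really is the canonical "primitive task we can actually do"; for example, with scikit-learn's bundled digits set it takes only a few lines:

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Load 8x8 digit images and train a plain linear classifier.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))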
I can simulate my boss: "Blah Blah Blah Work Smarter Blah". Does that mean I've created a functioning brain?
Math or it didn't happen.
Speculation without hard facts to back it up is poppycock gobbledygook.
if true, appears to support his claim, assuming that if we do have quantum minds, they cannot be simulated by non-quantum computers. Roger Penrose is certainly well respected. I don't have a strong opinion either way.
It's fun to think about "what if our thoughts are the universe itself!" but it's so blatantly wrong. It's just one step away from saying consciousness is "mystical" or created by philotic twining.
It's also possible that consciousness is better understood as a computational process which is implemented on a VM instantiated by a brain. In that case looking too hard at the squishy stuff is not directly relevant and may be distracting, since it's a very complex way to implement a computer. This is logically possible and is of historic significance in AI.
My view is that the everyday concept of consciousness is mistaken, and that no such thing will be found. Our common-sense view of perception and cognition has been found over and over to be completely wrong, so we shouldn't expect consciousness to turn up anywhere just because it feels like it ought to.
Functional approaches to brain modelling neatly avoid this issue by just building away and not worrying about it.
TLDR: physical doesn't mean explicable, and consciousness won't be explained by a physical process since it doesn't exist.
What bothers me about all of this is that he writes books about it aimed at the general population, instead of proposing his ideas properly. Inevitably many of the laymen who casually encounter his ideas will misunderstand his point entirely and mistakenly think that Penrose supports a non-materialistic view of the mind (which of course he does not. A mind as Penrose envisions it, despite not being algorithmic, is still quite materialistic).
Roger Penrose is a brilliant man, but I think this is a case of the Nobel Disease (well, he hasn't received a Nobel prize, but even so).
Everything will -- somehow -- be better. Just add quantum. (Protip: go read Scott Aaronson's relevant articles, papers, monographs, course materials, etc.)
I don't claim the effects aren't present. I do claim they are unnecessary for the emergence of human level consciousness and the reproduction thereof in silicon.
What makes a quantum computer better at AI?
The thing is, even Penrose isn't proposing that anything mystical is being done by quantum mechanics. He doesn't like the idea of an algorithmic mind, so he invokes quantum phenomena to make the mind technically no longer algorithmic. In the real world, though, nothing prevents you from doing something similar with a desktop computer. You can rig together a (crappy) RNG for your x86 desktop with a serial port and a smoke detector. Penrose isn't trying to suggest that intelligence could not be replicated in man-made machines, but rather that the human mind is not restricted in the way we know purely algorithmic systems to be.
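A sketch of that rig, assuming a hypothetical detector that writes one byte per decay event to an invented /dev/ttyUSB0 (device path and protocol are made up for illustration; the serial module is pyserial):

    import time
    import serial   # pyserial

    port = serial.Serial("/dev/ttyUSB0", 9600)   # invented device path

    def physical_bit():
        port.read(1)                 # wait for one decay event
        t0 = time.monotonic_ns()
        port.read(1)                 # ...and the next
        t1 = time.monotonic_ns()
        return ((t1 - t0) // 1000) & 1   # low bit of the gap in microseconds

    byte = sum(physical_bit() << i for i in range(8))
    print("one crappy but non-algorithmic random byte:", byte)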
At least until we as humans learn to get along better, we might do better to hold off on this. Dumb robots that know how to do one thing and do it well will surely suffice in the meantime.
"A Large-Scale Model of the Functioning Brain"