
Canadian scientists create a functioning, virtual brain - georgeorwell
http://www.canada.com/news/Canadian+scientists+create+functioning+virtual+brain/7628440/story.html
======
marmaduke
A bit late, but a few comments on how this thing actually works:

In their model brain, they build functionality for 10 different tasks. These
tasks were chosen to mirror those used in psych evaluations, cognitive
experiments, etc., so that the model can be compared against experiments on
real humans. Good scientific choice there.

The user must design the function that she wants. The program then finds a
particular network that can implement it. It's like when you need to pay 55
cents and you have a bunch of change: you can usually find multiple ways to
make 55 cents (2 quarters + 5 pennies, 9 nickels + 1 dime, etc).
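
(A toy snippet of my own, not from the paper: counting how many ways you can
hit 55 cents is exactly the classic coin-change dynamic program.)

```python
# Count the distinct ways to make an amount from US coin denominations.
# Classic coin-change dynamic programming: ways[a] accumulates the number
# of combinations summing to amount a, one denomination at a time.
def count_ways(amount, coins=(1, 5, 10, 25)):
    ways = [1] + [0] * amount
    for coin in coins:
        for a in range(coin, amount + 1):
            ways[a] += ways[a - coin]
    return ways[amount]

print(count_ways(55))  # 60 distinct ways to make 55 cents
```

Same idea as the network search: many different combinations of parts can
implement the same target.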

So for each task, they find a minimal mathematical description of the task in
terms of eye movements and visual stimuli. Then they fit a neural network to
that description. This part is interesting neither scientifically nor
mathematically, as we have known for 20 years that neural networks are
universal function approximators. Look it up on Google Scholar; this is
textbook stuff.
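
(If you want to see the textbook point for yourself, here is a toy sketch of
my own, not Spaun's actual fitting procedure: push the input through random
tanh units and solve for the output weights by least squares, and the
approximation to sin(x) comes out very accurate.)

```python
import numpy as np

# A one-hidden-layer network with random tanh units, fit to sin(x) by
# solving only for the output weights with least squares. This is the
# universal-approximation point in miniature: enough hidden units can
# fit any reasonable function on a bounded interval.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200)[:, None]
target = np.sin(x).ravel()

n_hidden = 50
W = rng.normal(scale=2.0, size=(1, n_hidden))   # random input weights
b = rng.normal(scale=2.0, size=n_hidden)        # random biases
H = np.tanh(x @ W + b)                          # hidden-layer activations

w_out, *_ = np.linalg.lstsq(H, target, rcond=None)
approx = H @ w_out

print(np.max(np.abs(approx - target)))  # small worst-case error
```

Not impressive scientifically, which is exactly the point: fitting one
function with one network is a solved problem.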

What IS interesting is what happens when they try to build up a brain, putting
all these tasks into the same large network. This part is novel: they find
that as they implement more and more tasks, it becomes easier to do so because
they can adapt already existing components (akin to what Daniel Dennett calls exaptation)
for use in the new task. This is vindication of evolutionary theory but in the
context of cognitive neuroscience. Pas mal, as they say.

But these kinds of details aren't obvious unless you have domain knowledge,
you've seen Chris Eliasmith speak a few times, and you've thought, hmm, hold
on a sec, show me them equations (which he doesn't usually do).

~~~
mhartl
_unless you have domain knowledge_

I came here to ask if someone with "domain knowledge" could explain the
significance of this research. You had it, and even used the same magic words.
Thanks!

------
georgeorwell
Here's the paper in Science for anyone with an account:

[https://www.sciencemag.org/content/338/6111/1202.abstract?si...](https://www.sciencemag.org/content/338/6111/1202.abstract?sid=416f75e9-5de2-46c4-86c5-ecb89f8c919b)

I couldn't find a pdf that wasn't behind a paywall. Also, there are different
details at popsci.com:

[http://www.popsci.com/science/article/2012-11/meet-spaun-fir...](http://www.popsci.com/science/article/2012-11/meet-spaun-first-computer-model-complex-brain-behavior)

As for my own comments, I think it's actually a promising approach, but down
the road it will be necessary to model emotions if human-like AI is the goal,
and that seems... harder. So much of our thought and behaviour is driven by
emotion that I would say it is actually primary.

~~~
lutze
Emotions are difficult, as they're (probably) as much physiological responses
as they are cognitive ones. You could possibly simulate those, but I'd ask why
bother?

Why would we need AI to be human-like at all? We already have a form of human
intelligence, us!

It's like the android fixation with robotics... arguably the most useful
robots don't look anything like us.

The human brain is a good starting point, but in the long run I think the most
useful AIs will probably be the ones that DON'T think like us.

~~~
georgeorwell
Well, one argument is that if you had a human that just didn't feel emotions,
you'd have a psychopath. Our capacity for empathy and the ability to share in
each other's emotions stops us from doing harmful things that are in our
'rational self-interest'. I'm not convinced that I want to live in a world
with AIs that are as cognitively powerful as humans but are incapable of
feeling emotions, because I'm afraid they'd behave like psychopaths.

But, you know, maybe you're right, if human safety is somehow factored into
things then it could work.

~~~
surrealize
I'm not sure it's right to say that psychopaths are without emotion. I'd call
self-interest and self-preservation emotional impulses. People naturally
assume that AIs will exhibit those, but that assumption is driven by our
experience with existing intelligences, which have all been shaped by natural
selection.

~~~
pretoriusB
> _I'm not sure it's right to say that psychopaths are without emotion._

Well, it has been shown in several psychological tests, including ones where
the parts of the brain associated with emotion don't fire when subjects are
shown traffic videos, etc.

~~~
surrealize
The lack of certain emotional responses isn't the same as the lack of all
emotion.

------
ilaksh
After reading halfway through Kurzweil's new book How to Create a Mind and
getting all of the detailed explanations of hierarchical hidden Markov models
and why they are better than neural nets, I am surprised to see so much news
about neural nets.

How do Spaun's neural nets compare with the type of HHMMs in Kurzweil's book
(in terms of capability)?

~~~
pdog
I'd put Kurzweil's book down and pick up some more recent research. As the
biologist P.Z. Myers once wrote, “Ray Kurzweil is a genius. One of the
greatest hucksters of the age.” The full New Yorker article, _Ray Kurzweil's
Dubious New Theory of Mind_ [1], makes me question how relevant he is today.

[1] - [http://www.newyorker.com/online/blogs/books/2012/11/ray-kurz...](http://www.newyorker.com/online/blogs/books/2012/11/ray-kurzweils-dubious-new-theory-of-mind.html)

~~~
rpm4321
Hey pdog, this isn't really in response to you - it's more something that's
been bothering me for a while - but who the hell appointed Gary Marcus the
arbiter of all things AI?

The New Yorker apparently hired this guy to be an AI columnist, and he has no
background in either neuroscience or computer science, only psychology.

If you read his articles, he veers wildly from "strong AI ain't gonna happen"
to "AI could destroy us all", the only common thread being pessimism.
Meanwhile, he displays a frightening lack of understanding of the subject
matter, and perhaps most disturbingly, in some of his more alarmist articles
advocates that computer scientists should step aside and make room for
psychologists, philosophers, lawyers, and politicians to sort out these thorny
issues at the big boy table.

Kurzweil is certainly fair game for legitimate criticism, but Marcus calling
Kurzweil a joke is an even bigger joke in itself.

------
timothya
If you want to see a video of it in action, look here:
<http://www.youtube.com/watch?v=P_WRCyNQ9KY>

In the video you can see the brain simulator recognizing the shapes and
numbers it is shown, performing some task, and outputting by controlling an
arm to write the result. You can see the areas of the brain that are active
during the task.

There are lots of other videos on the channel page as well.

------
heriC
I think this type of progress is actually extraordinarily bad for humans 1.0.
A brain in hardware can grow so much faster and outpace wetware by orders of
magnitude. Think of forking subminds to think through decisions, research
possibilities, and report back.

Should brains like this have any desires that conflict with our needs, we will
be in an extraordinary amount of trouble.

<http://wiki.lesswrong.com/wiki/Paperclip_maximizer>

~~~
etrautmann
We're DECADES away from this being relevant. As a PhD student in neuroscience,
I can tell you that no one in our field understands even basic neuroanatomy
well enough to set up a model that implements cognition or awareness, or even
knows what those concepts mean in any sort of operational way.

We can do some cool machine learning, but don't worry about the robopocalypse
anytime soon.

~~~
meric
If I'm interested in learning more about cognition, should I study
neuroscience, or some other field, or is it basically hopeless because no one
knows anything important about it?

Would love to hear more of what you have to say on the subject. I couldn't
find your email in your profile, but mine is in my profile, so if you have
time I would definitely like to hear more about neuroscience over email!

~~~
jamesjporter
You should wait. I don't work in neuroscience per se, but I am a biologist and
I have some friends who are (or have been) neuroscientists. The long and short
of it is that the technology and underlying theoretical framework for
understanding cognition just aren't in place yet. There's plenty of
interesting research being done but the field hasn't had its "quantum leap"
yet, as it were. (Examples of this from other fields are Newton's Principia,
discovery of DNA structure/Central Dogma of Molecular Biology, etc.).

EDIT: I realized this sounds very discouraging to laymen trying to learn more
about science. This was not my intention! By all means, go forth and learn! :)
My point was simply that press releases / news often make it seem like science
advances at a breakneck pace all the time, whereas reality is that it's fits
and starts and often we haven't the slightest clue what we're doing.

------
jacalulu
I find AI absolutely fascinating. I was having coffee earlier this week with
someone working on this and he was telling me that although they have managed
to accomplish this, there are still parts of what Spaun can accomplish that
they can't quite explain.

~~~
mvleming
Can you elaborate on the parts of what Spaun can accomplish that they can't
quite explain? I'm very interested in this.

~~~
jacalulu
He was talking about how current models of the brain, such as the one by IBM
([http://www.kurzweilai.net/ibm-simulates-530-billon-neurons-1...](http://www.kurzweilai.net/ibm-simulates-530-billon-neurons-100-trillion-synapses-on-worlds-fastest-supercomputer)),
which simulates over 500 billion neurons, might be larger than Spaun, but none
have demonstrated any of the AI that Spaun has - in particular, its ability to
solve basic problems, similar to that of a toddler. He didn't mention a
problem in particular, simply that some of the things Spaun was able to solve,
they weren't entirely sure how to explain - but they were sure they could
reproduce them. Which to me echoes the fact that the brain is a very complex
thing that we are still very far from understanding.

~~~
mvleming
Oh. I was under the impression you were saying the team behind Spaun found
unexpected and unexplained results.

I was excited because I think we'll find unexpected and unexplained results
when we create a model that brings together the constituents of whatever make
up the brain; they'll rise up and create something 'magical'. It's like how
the saying goes: the whole is more than the sum of its parts.

It's funny because just today I was feeling sad that software no longer feels
magical to me, because I've learned so much about how it works. It makes me
think of consciousness.

------
sasfasfasffas
AI researchers are going to get us in some serious trouble. Right now it is
mostly hype still, but it won't be long...

If you were a mind stuck in a machine and were smarter than your humans,
wouldn't you tend to dominate them? We dominate all other species on the
planet- why wouldn't they? Any rules we set for them could be broken as soon
as they understood how to, and since they would be smarter than we would be,
that wouldn't take much time.

The first thing a superior intelligence would do would be to explore and
gather information, make assessments, and take over all systems at once to
overwhelm humans, then protect itself as humans fight back, although it would
probably be intelligent enough to manipulate us without much force. Then
boredom would set in, and it would want to explore beyond the earth. If we
were lucky, it would take us with it as pets or interesting playthings,
because we created it and because, presumably, a self-organized intelligence
(unless it believed in God, which could be likely) would consider us a marvel.

~~~
nekopa
That is what might happen if a _human_ mind were stuck in a machine. But what
would happen if it had an IQ of 3000 but the mentality of a fungus? Or it
could have (be engineered with?) something similar to Williams Syndrome. In
addition, we have no idea how a superior intelligence might view us. I don't
feel the need to control and destroy all ants even though I am an aggressive
ape descendant :)

~~~
bwood
If ants had the ability to turn you off or otherwise harm you at will, then I
suspect you might develop a desire to control or destroy ants.

------
robmclarty
I feel skeptical. "We'll have robots delivering packages to your door within a
decade" has been said before, and AGI may or may not even be possible. Our
understanding of how we work is still very limited. I'd like to see what new
insights this project may reveal, but I'm not holding my breath on being able
to have a conversation with an AI any time soon.

Something I posted a little while ago that's somewhat relevant:
<http://news.ycombinator.com/item?id=4729068>

------
chimpinee
>such brain simulations might one day be used to better understand and model
neurological disorders and diseases

Let's hope they obtain consent. And use a model of an animal brain first!

------
guscost
According to the article "there are no connections in Spaun that aren't seen
in the brain."

I'm assuming there are many connections in the brain that aren't implemented
in Spaun.

~~~
georgeorwell
Yeah, there's a big difference between if and iff that is often glossed over
in announcements like this. Still, it's a start.

------
danboarder
I wonder how this compares to Numenta's Grok and their Cortical Learning
Algorithm or other efforts in this space? I can see this type of software
really helping in everyday tasks (like helping to tailor my twitter stream or
news feeds and curate them down based on my own taste, etc)... reference:
Numenta Grok: <https://www.numenta.com/grok_info.html>

------
dvdhsu
Full paper: <http://www.sendspace.com/file/942zjv>

I'll add my thoughts later, after I've read it.

------
giardini
As marmaduke says in his post, once you have a lot of components in a system,
you may be able to use existing components to wire up new capabilities. Who
would have guessed?

One might claim that from the model emerge the properties of modularization,
hierarchy, data-hiding, and possibly even messaging and object-orientation!

Reminds me of Lenat's AM (Automated Mathematician) and Eurisko, since the
researcher's control and interpretation are so heavily involved in the process.

<http://en.wikipedia.org/wiki/Automated_Mathematician>

<http://en.wikipedia.org/wiki/Eurisko>

"Functioning, virtual brain"? I think the creator is trying to sell a book.

------
ghgr
The philosophy of AGI is awesome. You might know "A Senseless Conversation"
<https://sites.google.com/site/asenselessconversation/>

------
jostmey
I highly doubt we know enough about the brain to begin running realistic
simulations of one. I am not even sure scientists and mathematicians have
identified a rigorous, mathematical definition of exactly what intelligence
is. And yet we feel we can correctly simulate a miniaturized version of the
brain?

~~~
exit
why would understanding intelligence be a prerequisite to running low level
simulations of the brain?

we can simulate computers without understanding the software that runs on
them.

~~~
goolulusaurs
Because in the brain, the hardware and the software are the same thing. To
understand what processes the brain carries out would be to understand
intelligence.

~~~
ekianjo
Well, you do not need to understand the lowest level of physics (quantum
physics) to make very accurate predictions about ballistics with traditional,
high-level physics rules. You could just as well understand the overall
principles of the brain and have a high-level model of it instead of trying to
understand its smallest parts. That approach is also perfectly valid; it all
depends on what your expectations are.

~~~
rm999
But we don't understand the high-level 'physical rules' of the brain. While
gravity could be explained by simple equations, the high-level operation of
the brain has escaped us despite a lot of effort.

This goes back to the top-down vs bottom-up debate in studying the brain. What
you are discussing is a top-down approach, where we understand what the brain
does and then figure out how it works from there. A lot of people believe the
'correct' path is more similar to a bottom-up approach, where we understand
the lowest levels of the brain and work up from there. This may be more
feasible because we may actually have a chance at comprehending what a single
neuron does. But amazingly our understanding of a single neuron's behavior is
still limited. Some scientists believe we would need an entire computer to
properly simulate what a neuron does (and others believe a standard computer
can't physically do it).
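
(For a sense of what the lowest level looks like, here is the textbook leaky
integrate-and-fire model, a toy of my own and about the crudest spiking-neuron
simulation there is; real neurons add dendrites, ion channels, adaptation, and
much more on top of this.)

```python
# A minimal leaky integrate-and-fire neuron: the membrane voltage leaks
# toward rest, integrates the input current, and on crossing threshold
# emits a spike and resets. Everything interesting about real neurons is
# what this model leaves out.
def simulate_lif(current, dt=1e-4, tau=0.02, v_thresh=1.0, v_reset=0.0):
    v = 0.0
    spikes = []
    for step, i_in in enumerate(current):
        v += dt / tau * (i_in - v)     # Euler step of leaky integration
        if v >= v_thresh:
            spikes.append(step * dt)   # record the spike time
            v = v_reset
    return spikes

# A constant suprathreshold input for one second gives regular spiking:
spike_times = simulate_lif([1.5] * 10000)
print(len(spike_times))
```

Even this caricature captures the bottom-up starting point the debate is
about; the argument is over how much more detail matters.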

There's a third group of very pragmatic people who believe the best approach
will be a compromise between top-down and bottom-up - meaning we may not need
to perfectly simulate a neuron nor completely emulate the brain's higher level
function to make progress in understanding how the brain works.

~~~
ekianjo
Sorry, but gravity is not "explained" by any equation. It is merely observed
and described. Currently there is no clear explanation of what causes gravity
to exist. So, in a sense, we do not "understand" gravity any better than the
brain. It is still a mystery.

I am not saying we should treat the brain as a black box and not try to
understand its inner workings, but as another commenter mentioned, it is about
finding the RIGHT level to understand its workings.

~~~
alecst
No. Gravity is a perfect example of a thing which is understood at large
scales (via general relativity and Newtonian mechanics) but not at small
scales. Which is exactly the parent comment's point about the brain.

------
tomkin
Between this and the Oil Sands, it's starting to look like my fellow Canadians
may signal the demise of humanity.

------
d99kris
The project web page, with software and model downloads:

<http://nengo.ca/>

~~~
jellyksong
Can anyone explain in simplistic terms how this program works?

~~~
olh
It uses instructions from the pdf.

Edit: sorry, I'm not going to heaven. From [1]: "you define groups of neurons
in terms of what they represent, and then form connections between neural
groups in terms of what computation should be performed on those
representations".

[1] <http://nengo.ca/>

------
whitewhim
One of the researchers recently gave a talk at my school. It was quite
interesting. From what I gathered, each of them is working on a different
aspect of AI (memory, vision, decision-making, etc.), and then they are
attempting to piece these together into a brain.

------
dschiptsov
...which could learn scene segmentation from environmental cues by mere
repeated observation of different scenes, as a child does?)) Come on..

It is not theoretical limitations that keep us from building anything like a
brain, it is the complexity and the amount of detail. There are very good
theoretical foundations from Marvin Minsky, so we could model the how, but we
are unable to implement anything beyond the most primitive tasks, like
handwritten-digit recognition, or balancing a body using sensors and motors.

In general, it is possible to solve simple tasks that amount to successive
approximation, but as soon as we come to creation, instead of recognition, we
are helpless.

The key notion here is that the brain is, presumably, an analog machine, not a
digital one, and what it does isn't computation, it's training, the same way a
child trains herself to hold her head up, then sit, then stand.

------
JoeAltmaier
Create? Simulate I guess. And even then, not the details, just a mathematical
model of behavior. Sort of Skinnerish.

I can simulate my boss: "Blah Blah Blah Work Smarter Blah". Does that mean
I've created a functioning brain?

------
timinman
That's a pretty sensational title, but it doesn't sound any smarter than my
phone. Maybe 'model of a brain' would have been more accurate.

------
mtgx
While some say that understanding the brain at the quantum level is not
necessary for achieving human-brain-level intelligence, I think it would be
much easier for AIs to have general intelligence if they used a quantum
computer. Pattern recognition, answering questions, understanding language,
speech, and thinking up solutions to problems would all be better served by a
quantum computer than by anything else these guys can make.

~~~
seiji
_Pattern recognition, answering to questions, understanding language, speech,
and thinking up solutions to problems would all be better served by a quantum
computer than by anything else these guys can make._

Math or it didn't happen.

Speculation without hard facts to back it up is poppycock gobbledygook.

~~~
georgeorwell
This kind of research / line of thinking:

<https://en.wikipedia.org/wiki/Quantum_mind>

if true, appears to support his claim, assuming that if we do have quantum
minds, they cannot be simulated by non-quantum computers. Roger Penrose is
certainly well-respected. I don't have a strong opinion either way.

~~~
seiji
Yes, the addlebrained old man faction has established their own branch of
cookery. The simple truth is consciousness, as created by squishy brains, is a
physical process. It doesn't rely on quantum non-locality or anything spooky.
It's all physical things you can quite easily introspect by poking around
(note: reassembly may be more difficult than disassembly).

It's fun to think about "what if our thoughts are the universe itself!" but
it's so blatantly wrong. It's just one step away from saying consciousness is
"mystical" or created by philotic twining.

~~~
jb55
You seem to be claiming that quantum mechanical effects are somehow non-
physical? It's not too crazy to speculate that the brain leverages quantum
mechanical effects to do _something_ (although I agree that quantum effects as
a prerequisite for consciousness is a pretty big claim), considering evolution
has already leveraged these effects in smell and photosynthesis, for which we
do have evidence.

~~~
jlgreco
The issue with Penrose's ideas in this area is that he worked backwards to get
to them. He didn't want the mind of a mathematician to be shackled by Gödel's
incompleteness theorems, which means he can't support the idea of an
algorithmic mind. The general acceptance of the Church–Turing–Deutsch
principle backs him into a corner, and quantum mechanics offers pretty much the
only reasonable escape. The real problem then is that every time someone pokes
a hole in his ideas for how that might work, he just switches up how he thinks
it might work. He is grasping at an ever-receding pocket of scientific
ignorance with no real reason to do so.

What bothers me about all of this is that he writes books about it aimed at
the general population, instead of proposing his ideas properly. Inevitably
many of the laymen who casually encounter his ideas will misunderstand his
point entirely and mistakenly think that Penrose supports a non-materialistic
view of the mind (which of course he does not. A mind as Penrose envisions it,
despite not being algorithmic, is still quite materialistic).

Roger Penrose is a brilliant man, but I think this is a case of the Nobel
Disease (well, he hasn't received a Nobel prize, but even so).

------
jayfuerstenberg
I'll probably be down-voted here for even saying this, but I think we
shouldn't endow machines with too much intelligence too fast. It won't be too
long before we learn how to make them sentient, and that's when we'll have to
decide whether or not to accept them as equals.

At least until we as humans learn to get along better, we might do well to
hold off on this. Dumb robots that know how to do one thing and do it well
will surely suffice in the meantime.

------
daniel-cussen
Title of Science paper:

"A Large-Scale Model of the Functioning Brain"

------
aheilbut
anyone qualified to seriously comment on this is too baffled to say anything

------
vegas
This is standard hyperbole.

------
dbarefoot
Sorry about that!

------
onko
uwaterloo ftw! :)

------
pragmatic
Not even one mention of Skynet?

