
New $1.6B supercomputer project will attempt to simulate the human brain - Frisette
http://io9.com/5980117/new-16-billion-supercomputer-project-will-attempt-to-simulate-the-human-brain
======
kanzure
This video/lecture is rather long, but it's an excellent introduction to what
the Blue Brain Project is working on:

[http://www.youtube.com/watch?v=9gFI7o69VJM&list=PLgO7JBj...](http://www.youtube.com/watch?v=9gFI7o69VJM&list=PLgO7JBj821uEq-iLteI2BgeXc8JY1PgF2#t=9m)

That was back when Henry Markram was still collaborating with IBM. It explores
some of the intricacies of reverse engineering neurons and the human brain.

Another good source of information is the Whole Brain Emulation Roadmap:

[http://diyhpl.us/~bryan/papers2/brain-emulation-roadmap-repo...](http://diyhpl.us/~bryan/papers2/brain-emulation-roadmap-report.pdf)

This was the funding proposal that they sent for the Human Brain Project:

<http://diyhpl.us/~bryan/papers2/neuro/HBP_flagship.pdf>

------
tgflynn
I would feel less skeptical about the prospects for simulating the human
brain if I had heard that someone had succeeded in simulating a vastly
simpler nervous system, say that of a spider, and obtained spider-like
behavior from it.

~~~
kanzure
Not a spider, but how about C. elegans?

<http://openworm.org/>

<https://github.com/openworm>

<http://nemaload.davidad.org/>

<http://diyhpl.us/~bryan/papers2/neuro/nematodeuploadproject/>

~~~
streptomycin
Those don't really work, though:
[http://lesswrong.com/lw/88g/whole_brain_emulation_looking_at...](http://lesswrong.com/lw/88g/whole_brain_emulation_looking_at_progress_on_c/)

From the Human Brain Project
<http://www.humanbrainproject.eu/neuroscience.html>

Why not begin with simple organisms like C.elegans?

There are two problems here. The first is feasibility; the second is the
relevance of our results.

Feasibility. Neuroscientists have mapped all of C. elegans' 300 or so
neurons. However, enormous amounts of key data are still missing. For
instance, we do not have enough data on the physiology and pharmacology of
C. elegans neurons and synapses. And we still have limited data on the
distribution of ion channels, receptors and other proteins on neurons,
synapses and glia. Without this data we cannot build unifying models. A
second problem is how easy it is to obtain the data. The crucial
requirement for unifying models is the ability to access the data needed.
Obtaining a deep understanding of the molecular machinery of a single
neuron or a single synapse is just as difficult in C. elegans as in human
beings. And many datasets - particularly data on cognition - are actually
easier to acquire in rodents, or even in humans. So we can't just say:
"let's do this quickly in worms and do complex brains later": we have to
solve the same basic challenges, whatever brain we model. What we are
actually doing is building a generic strategy we can use to reconstruct
any brain.

Relevance. Studying the "simple" nervous systems of organisms like
C. elegans or drosophila is obviously very important, particularly for
molecular and genetic studies. However, the organization,
electrophysiology and function of the mammalian brain are quite
different. One of the HBP's most important goals is to contribute to the
development of new treatments for brain disease. But pharmaceutical
companies already have great difficulties in translating results from
mouse to human beings; with simpler organisms these problems become much
worse. If we want to make a real contribution to clinical research, it is
probably unwise to invest heavily in simple systems, so distant from the
human brain.

~~~
tgflynn
It sounds like they're saying 300 neurons is too hard, so let's do 10^11
instead.

Science usually proceeds incrementally from easier problems to harder
problems, and I'm not seeing that foundation here.

Weren't simple genomes sequenced long before it was proposed to sequence
the human genome?

~~~
neuroguy
The argument is partially like that, but the more complete version is:

1) We know a lot about brain cognition through psychology, AI, etc.

2) We know a lot about how people process information from all the years
of experimentation.

3) The big things we don't know are exactly what the role of the cell is,
and how it processes. For example, is it the synchronous firing of cells
that allows us to experience? What is actually going on in all those
billions of cells?

4) We know a lot about the electrical activity in the brain, and some
about the molecular and chemical processes, but what we don't have is an
overarching way of putting all this together, so we know what we are
looking at.

5) We don't know what C. elegans think about or what causes them to do
things, so we will have a harder time understanding what exactly makes
them tick.

I apologize if there is repetition, unclear lines, or bad reasoning; I am
in the middle of running some brain simulations and had a minute while
they ran.

~~~
varjag
On the other hand, the activity patterns of C. elegans neurons can be
comprehensively studied, and its behavior in response to simple stimuli
is well characterized. Simulating it has all the promising properties of
the good old scientific method.

It is debatable whether building a $1.6B catatonic brain will advance
neuroscience more than a comprehensive, experimentally matched simulation
of a simpler system first.

~~~
neuroguy
I don't think there is much debate about it. I think it is fairly obvious
that this project will advance neuroscience quite a bit, even if it would
be a smarter choice to start with C. elegans. A project like this,
putting together what we currently know from experimentation, is the
necessary next step. Will it bring the massive, game-changing
understanding of the brain that is hoped for? Who knows.

------
davidmr
This is fascinating. In a previous life, my job was building/running
supers at a DOE lab, and while the work done on the machines was quite
interesting, the really grandiose projects were all physics-related
(supernova simulation, liquid salt reactor cooling, jet combustion, etc.).

The biological projects were still cool and I'm certain they were
important, but they were harder to relate to (ion channel simulation,
behavior of water in confined spaces, etc.) - mostly biochemistry.

This could provide a nice bit of press for the large-scale HPC community,
which sometimes suffers from its association with the DoD and the "other,
darker side" of the DOE (or other non-US equivalents).

------
ComputerGuru
Correct me if I'm wrong, but it doesn't matter how complex the simulation
is if there simply isn't enough underlying data. As far as I understand,
we simply do not understand neuron activity at a low enough level to make
such a thing feasible.

~~~
kanzure

> As far as I understand, we simply do not understand neuron activity at
> a low enough level to make such a thing feasible.

Henry Markram has always been adamant that there are enough details
already discovered and put into the neurophysiology journals. Of course,
now you have to fish that data out into some usable format.

<http://channelpedia.epfl.ch/ionchannels>

<http://www.neuroml.org/>

~~~
adrianhoward
_Henry Markram has always been adamant that there are enough details
already discovered and put into the neurophysiology journals. Of course,
now you have to fish that data out into some usable format._

I would tend to disagree. For example, as recently as ten years ago
_everybody_ knew that most of the computation was implicit in the
synaptic connectivity between neurons. We now know that there is
significant computation within individual neurons - in the dendrites, of
all things (previously thought to be pretty much passive carriers of
output from other cells - just wires, basically).

(See
[http://www.annualreviews.org/doi/abs/10.1146/annurev.neuro.2...](http://www.annualreviews.org/doi/abs/10.1146/annurev.neuro.28.061604.135703)
for example).

Go look at the neurophysiology journals - people don't seem to have
problems finding new things to talk about ;-)

~~~
kanzure
I think it's a big jump from what I said to your interpretation of my
comment, that "there are no new things to talk about". Besides, if you
just assume that there are no details available, then you will never
aggregate them. But the reality is that there is a great deal of detail
in the literature, which can be used to build (even incomplete) models.
You could even use the model to help inform toiling grad students which
neurons to poke at more.
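As a sense of scale for what an "even incomplete" model can look like, here
is a toy leaky integrate-and-fire neuron in Python. This is purely my own
illustrative sketch with textbook-style default parameters, not Blue Brain's
code or anything fitted to published data:

```python
# Toy leaky integrate-and-fire neuron: dV/dt = (-(V - V_rest) + R*I) / tau.
# Parameter values are illustrative defaults; a real model would fit them
# to recordings from the neurophysiology literature.

def simulate_lif(current_nA, dt_ms=0.1, t_ms=100.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0,
                 r_mohm=10.0, tau_ms=10.0):
    """Integrate a constant input current; return the spike times in ms."""
    v = v_rest
    spikes = []
    for step in range(int(t_ms / dt_ms)):
        # Euler step of the membrane equation (R in MOhm * I in nA -> mV).
        dv = (-(v - v_rest) + r_mohm * current_nA) / tau_ms
        v += dv * dt_ms
        if v >= v_thresh:           # threshold crossed: spike and reset
            spikes.append(step * dt_ms)
            v = v_reset
    return spikes
```

With these made-up parameters a 2 nA step depolarizes the cell past
threshold, so it fires repeatedly, while 1 nA only reaches 10 mV of the
15 mV gap to threshold and produces no spikes at all.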

Also, thanks for the paper. Do you have others?

------
10dpd
This is truly exciting - even if the project does not achieve the final goal
of 'simulating' the human brain, the discoveries along the way will be
invaluable.

------
mbq
I have a feeling that this is like an attempt to simulate an SoC computer
at the quantum-physics level, based on a blurry microscope image of it,
in the hope of playing minesweeper on it. The basic principles (neuron
behaviour) are hellishly complex and we must make a lot of dangerous
simplifications; we know nothing about the firmware (the pre-wiring given
by the evolution of our species); and finally, the software will be
completely missing (in humans it is created by a few years of supervised
training and self-improvement).

Why not try from the top?

~~~
kanzure
> we know nothing about the firmware (pre-wiring given by ...

Huh? That's what they are studying.

~~~
mbq
This would require dissecting newborns, so I doubt it. Anyway, my point
was that it is better to approach strong AI by improving and connecting
silicon-based ML methods than to put a huge amount of power and effort
into an imperfect neural abstraction, hoping that it will automatically
become conscious.

~~~
kanzure
> imperfect neural abstraction, hoping that it will automatically become
> conscious.

Consciousness is not the goal. What if your idea of consciousness is wrong?
Even Wikipedia admits that nobody knows what consciousness is.

~~~
mbq
Ok, bad wording. s/become conscious/acquire some higher functions of the
human brain/g

------
JoeAltmaier
I get it, it's interesting to understand our intelligence. But model a
human brain? Why? It's pretty cheap to make them (and fun). They don't
work all that well in the end - they're all bound up with food and sex
and fear.

Why not be even a little ambitious, and make something an order of
magnitude smarter? The human brain's size is arbitrarily bound by the
size of the female pelvis. Presumably the artificial one won't have that
limit.

And separate it from concerns of survival, paranoia, love, etc. Maybe it
could think straight and solve a few problems for us.

~~~
adrianhoward
_I get it, it's interesting to understand our intelligence. But model a
human brain? Why? It's pretty cheap to make them (and fun). They don't
work all that well in the end - they're all bound up with food and sex
and fear._

Because we still don't understand the human brain in many, many, many
ways. Building a model of it and comparing that model to reality is going
to help us better understand the human brain. That, in turn, will likely
help us understand things like dementia, mental illness, the effects of
drugs, etc.

That's one reason.

There are others ;-)

~~~
JoeAltmaier
It's really, really complicated too. Much of it is incidental to our
evolutionary path. Simulating that is like, well, simulating all the
cruft stuck to a piece of toast once you've dropped it on the floor.
Figure out anything at all about the human brain - thinking, for
instance - and you can skip the cruft and still have a monumental
achievement.

In fact, the temptation to do that will be enormous. That's why the whole
project seems fishy to me. Obviously simulating a subset is easier than
the whole, and any subset is pretty much a huge win; so why are they
talking about simulating it all? Sounds pie-in-the-sky.

------
DigitalSea
A Terminator Skynet scenario is currently playing out in my head. I
wonder how long after this human brain simulation runs it will take to
become self-aware and start attacking us? In all seriousness, if this
works it could potentially unlock so many mysteries and help solve &
understand a plethora of medical issues like mental illness. We are
living in extraordinary times.

~~~
AndrewKemendo
I wish this meme would die. It really just taints everything we are trying to
do with Strong AI.

~~~
scaphandre
Do you have any proof that Strong AI would be safe for humans?

~~~
Jach
The default outcome for AGI is likely bad
(<http://singularity.org/files/ComplexValues.pdf> and
<http://singularity.org/files/SaME.pdf>), but not in the ridiculous way
that movies portray it, and no one should be using the movies as a
substitute for thought
([http://lesswrong.com/lw/k9/the_logical_fallacy_of_generaliza...](http://lesswrong.com/lw/k9/the_logical_fallacy_of_generalization_from/)).
There are so many types of AI minds, and so many ways for them to go
wrong (<http://wiki.lesswrong.com/wiki/Paperclip_maximizer>), that
picking out one very particular fictional mind to stress over, make
movies about, and attempt to constrain (even framing it as a constraint
problem over some more "primitive" desire to destroy humanity), treating
it as the only default mind, is pretty silly.

------
askimto
Never trust a neuroscientist.

I doubt we will see an adequate simulation of a single human cell in our
lifetimes, much less a brain.

~~~
duaneb
Well, what do you mean by adequate? I doubt we'll see anything close to a
physically accurate (i.e. quantum lattice simulation) representation of a
cell, but we already have models that are good enough to guide research
before investing in (often costly) trials with actual cells. Furthermore,
with things like these, I'm much less interested in creating a human
brain than I am in creating something conscious on a human level, if that
is even possible. Consciousness does not imply anything about accuracy to
the human brain.

~~~
askimto
I'd be thrilled if we had a simulation that we trusted so much that we could
consider skipping trials on actual cells.

------
troymc
If free will exists, how will it enter into the simulation (or emulation)?

~~~
D_Alex
Great question, and one which I would like to try and answer at length, but
cannot spare the time...

Short answer: there are many definitions of "free will", and many
arguments about the subtleties of the meanings of these definitions.
AFAICT, a good argument can be made that making use of some random
phenomenon in reaching decisions implements "free will" for the vast
majority of these definitions (note that "random" does not need to mean
"arbitrarily random"). This answer is not deeply satisfying to me, but a
full elaboration of the issues would need several pages of text.
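As a toy illustration of that idea (purely my own sketch of the point
above, with made-up options and scores), injecting a random element into an
otherwise deterministic decision rule looks like this:

```python
import random

# Made-up deterministic preference scores for each option.
SCORES = {"tea": 0.62, "coffee": 0.60, "water": 0.30}

def decide(scores, noise=0.05, rng=random):
    """Pick the highest-scoring option after adding a small random
    perturbation, so near-ties can resolve either way - the "random
    phenomenon in reaching decisions" from the comment above."""
    perturbed = {opt: s + rng.uniform(-noise, noise)
                 for opt, s in scores.items()}
    return max(perturbed, key=perturbed.get)
```

With `noise=0` the agent always picks "tea"; with a little noise, the
near-tied "coffee" sometimes wins, while the clearly worse "water" never
does - the randomness only matters where the deterministic scores leave the
outcome genuinely open.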

------
martinced
_"...an international group of researchers has secured $1.6 billion..."_

An international group of researchers has _scammed_ $1.6 billion away.
And that's EU taxpayers' money.

Do you really think that _anyone_ involved in deciding to allow that kind
of spending has any idea where we're at nowadays regarding AI? Did any of
them read _"On Intelligence"_ (its author knowing more than a thing or
two about AI)?

I'm sure not. And I'm not happy my tax money is funding this.

I'm all for research, and for funding going to research.

But this one is going to be a gigantic waste not leading to anything. And
in ten years people will apologize and explain why "x is not AI", "y is
not AI", and why it was a gigantic waste.

On a positive side note, $1.6bn over the duration of this project is
peanuts compared to the $140bn the EU spends yearly ;)

~~~
aheilbut
There's more information about the project here:
<http://www.humanbrainproject.eu/>
<http://www.humanbrainproject.eu/files/HBP_flagship.pdf>

The project is not actually as absurd as it sounds -- the funding is over 10
years, and it's funding a large number of different research programs in
molecular neuroscience (8%), cognitive neuroscience (12%), theoretical
neuroscience (6%), neuroinformatics (7%), medical informatics (6%), brain
simulation (10%), HPC (18%), neuromorphic computing (14%), robotics (11%), and
society/ethics (2%). About half of the budget is going to personnel /
students, and the research is being done by a large consortium of established
PIs.
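Taking those percentages at face value, a back-of-envelope split of the
total works out as follows (my own arithmetic; note the listed categories
only cover 94% of the budget, the remainder presumably being management and
overhead):

```python
# Back-of-envelope split of the $1.6B Human Brain Project budget,
# using the percentages quoted above.
TOTAL = 1.6e9  # USD, over ~10 years

shares = {
    "molecular neuroscience": 8,
    "cognitive neuroscience": 12,
    "theoretical neuroscience": 6,
    "neuroinformatics": 7,
    "medical informatics": 6,
    "brain simulation": 10,
    "HPC": 18,
    "neuromorphic computing": 14,
    "robotics": 11,
    "society/ethics": 2,
}

allocations = {area: TOTAL * pct / 100 for area, pct in shares.items()}

print(sum(shares.values()))    # 94 - the listed areas don't sum to 100
print(allocations["HPC"])      # 288000000.0
```

So HPC, the single biggest line, gets roughly $288M over the decade, while
society/ethics gets about $32M.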

Basically, they're creating a "European Institute of Neuroscience".
IMHO, the way it's been branded as a giant "brain simulation" makes it
look a little silly to other scientists -- but on the other hand it seems
to have worked pretty well with the politicians.

~~~
cdcarter
It's beyond exciting that over a whole percent of the money is going to an
ethics line.

~~~
varjag
Well, the EU needs to keep humanities grads employed somehow.

But seriously, this project has all the hallmarks of becoming the next
Nanotech: overly broad, loosely defined, overfunded, and with little
practical output for the money.

------
joelbm24
Something tells me they are going to call it Skynet.

------
felipesabino
Still no Skynet comments?

------
Nelsonned
Hopefully they don't put Windows on that computer...

------
pgambling
I for one welcome our new supercomputer overlords.

------
olliesaunders
I think this is a bad idea on so many levels. But all of that aside, just
think of all the other research that's not getting funded because of
these assholes.

