New $1.6B supercomputer project will attempt to simulate the human brain (io9.com)
73 points by Frisette on Feb 4, 2013 | hide | past | favorite | 74 comments



This video/lecture is rather long, but it's an excellent introduction to what the Blue Brain Project is working on:

http://www.youtube.com/watch?v=9gFI7o69VJM&list=PLgO7JBj...

That was back when Henry Markram was still collaborating with IBM. It explores some of the intricacies of reverse engineering neurons and the human brain.

Another good source of information is the Whole Brain Emulation Roadmap:

http://diyhpl.us/~bryan/papers2/brain-emulation-roadmap-repo...

This was the funding proposal that they sent for the Human Brain Project:

http://diyhpl.us/~bryan/papers2/neuro/HBP_flagship.pdf


I would feel less skeptical about the prospects for simulating the human brain if I had heard that someone had succeeded in simulating an infinitely simpler nervous system, say that of a spider, and obtained spider-like behavior from it.



Those don't really work, though: http://lesswrong.com/lw/88g/whole_brain_emulation_looking_at...

From the Human Brain Project http://www.humanbrainproject.eu/neuroscience.html

Why not begin with simple organisms like C. elegans?

There are two problems here. The first is feasibility; the second is the relevance of our results.

Feasibility: Neuroscientists have mapped all of C. elegans' 300 or so neurons. However, enormous amounts of key data are still missing. For instance, we do not have enough data on the physiology and pharmacology of C. elegans neurons and synapses. And we still have limited data on the distribution of ion channels, receptors and other proteins on neurons, synapses and glia. Without this data we cannot build unifying models. A second problem is how easy it is to obtain the data. The crucial requirement for unifying models is the ability to access the data needed. Obtaining a deep understanding of the molecular machinery of a single neuron or a single synapse is just as difficult in C. elegans as in human beings. And many datasets – particularly data on cognition – are actually easier to acquire in rodents, or even in humans. So we can't just say: "let's do this quickly in worms and do complex brains later": we have to solve the same basic challenges, whatever brain we model. What we are actually doing is building a generic strategy we can use to reconstruct any brain.

Relevance: Studying the "simple" nervous systems of organisms like C. elegans or Drosophila is obviously very important, particularly for molecular and genetic studies. However, the organization, electrophysiology and function of the mammalian brain are quite different. One of the HBP's most important goals is to contribute to the development of new treatments for brain disease. But pharmaceutical companies already have great difficulties translating results from mouse to human beings; with simpler organisms these problems become much worse. If we want to make a real contribution to clinical research, it is probably unwise to invest heavily in simple systems, so distant from the human brain.


It sounds like they're saying 300 neurons is too hard so let's do 10^11 instead.

Science usually proceeds somewhat incrementally from easier problems to harder problems, and I'm not seeing the foundation here.

Weren't simple genomes sequenced long before it was proposed to sequence the human genome?


The argument is partially like that, but the more complete version is:

1) We know a lot about brain cognition through psychology, AI, etc.

2) We know a lot about how people process information from all the years of experimentation.

3) The big thing we don't know is exactly what the role of the cell is, or how it processes. For example, is it the synchronous firing of cells that allows us to experience? What is actually going on in all those billions of cells?

4) We know a lot about the electrical activity in the brain, and some about the molecular and chemical processes, but what we don't have is an overarching way of putting all this together, so we know what we are looking at.

5) We don't know what C. elegans think about or what causes them to do things, so we will have a harder time understanding what exactly makes them tick.

I apologize if there is repetition, unclear lines, or bad reasoning; I am in the middle of running some brain simulations, and had a minute while they ran.


On the other hand, activity patterns of C. elegans neurons can be comprehensively studied, and its behavior in response to simple stimuli is well characterized. Simulating it has all the promising properties of the good old scientific method.

It is debatable whether building a $1.6B catatonic brain will advance neuroscience more than a comprehensive, experimentally matched simulation of a simpler system first.


I don't think there is much debate about it. I think it is fairly obvious that this project will advance neuroscience quite a bit, even if it would be a smarter choice to start with C. elegans. A project like this, putting together what we currently know from experimentation, is the necessary next step. Will it bring the massive, game-changing understanding of the brain that is hoped for? Who knows.


I never claimed that the physiology of C. elegans should be preferred over that of the human brain.


No offense intended, I'm just trying to contribute to the discussion.


Precisely. Can we even see a complete simulation of E. coli?

It's disheartening how much of science news and research today is all about hype, smoke, and mirrors. It's becoming more about catching fleeting fame and money grabbing than actually producing interesting advances and results.

Exactly how would they scale this when you have something like a hundred trillion synapses between the neurons in a brain? Mind you this is falsely assuming that there is nothing of worth to simulate within individual neurons/synapses. Our current technological infrastructure isn't even in the ballpark of being good enough to deal with actually LARGE graph data structures, and people are getting excited about this nonsense. They talk up these simulations, but we don't even know the basic details about these things yet.

Let's see a complete simulation of a spider's brain from the bottom up before talking about simulating the human brain. Let's figure out precisely what is going on in the brain of a spider. I will be surprised if we accomplish that in the next 50 years.
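For a rough sense of the scale the parent is pointing at, here is a back-of-the-envelope calculation; the neuron and synapse counts are just the commonly cited order-of-magnitude estimates, not precise figures:

```python
# Order-of-magnitude estimates commonly cited for the human brain.
neurons = 1e11             # ~100 billion neurons
synapses_per_neuron = 1e4  # ~10,000 synapses each
synapses = neurons * synapses_per_neuron  # ~1e15, i.e. a quadrillion

# Even storing a single 4-byte weight per synapse, with no dynamics
# at all, needs on the order of petabytes.
bytes_needed = synapses * 4
print(f"{bytes_needed / 1e15:.0f} PB just for static weights")  # prints "4 PB just for static weights"
```

And that is before any of the per-neuron or per-synapse dynamics the parent says we can't ignore.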


  > Precisely. Can we even see a complete simulation of
  > E. coli?
Kind of: http://www.cell.com/abstract/S0092-8674(12)00776-3


10 years ago people simulated a nematode, a type of worm, and got nematode-like behavior from it.


Do you have a reference?

Do these worms exhibit non-trivial behavior, or is it at the level of, say, reflex attraction to light sources?


Sort of. There are only 302 neurons in a nematode, but knowing what weights to put on each of the connections is an issue.

PS: Not mine, but this looks like a good summary. http://www.jefftk.com/news/2011-11-02
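To make the "unknown weights" problem concrete, here's a minimal rate-model sketch. The three-neuron chain and every number in it are invented for illustration, not taken from any real C. elegans data; the point is that the simulation machinery is trivial compared to knowing which weights to plug in:

```python
import numpy as np

# Hypothetical 3-neuron sensory -> inter -> motor chain. Every weight
# below is made up; pinning down the real values is the hard part.
weights = np.array([
    [0.0, 0.0, 0.0],   # sensory neuron: driven only by external input
    [1.2, 0.0, 0.0],   # interneuron: driven by the sensory neuron
    [0.0, 0.8, 0.0],   # motor neuron: driven by the interneuron
])

def step(rates, external, dt=0.1, tau=1.0):
    """One Euler step of a rate model: tau * dr/dt = -r + tanh(W.r + I)."""
    drive = weights @ rates + external
    return rates + (dt / tau) * (-rates + np.tanh(drive))

rates = np.zeros(3)
stim = np.array([1.0, 0.0, 0.0])   # "poke" the sensory neuron
for _ in range(100):               # ~10 time constants: near steady state
    rates = step(rates, stim)
print(rates)                       # activity has propagated down the chain
```

Swap in a different weight matrix and you get qualitatively different behavior from the same code, which is exactly why the missing physiological data matters more than the simulator.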


Actually if you read that link carefully - complete (or even complex) nematode behavior has never been simulated. Different teams have simulated different traits of behavior such as reaction to pokes on the head, non-spontaneous sinusoidal locomotion (there are different kinds of nematode locomotion triggered by the viscosity of the environment). Nothing resembling a learning nematode, indistinguishable from the real thing in terms of behavior, has ever been achieved (unfortunately).


The Human Brain Project's ultimate goal is to simulate the human brain, but it's a decade-long project with 80 institutes and hundreds of scientists involved. I am sure simulating simpler brain systems will be part of the agenda before they move to much more complex systems.


If I recall correctly, Blue Brain, also run by Markram, has successfully emulated a rat cortex in some capacity.


This is fascinating. In a previous life, my job was building/running supers at a DOE lab, and while the work done on the machines was quite interesting, the really grandiose projects were all physics related (supernova simulation, liquid salt reactor cooling, jet combustion, etc.)

The biological projects were still cool and I'm certain they were important, but they were harder to relate to (ion channel simulation, behavior of water in confined spaces, etc.), mostly biochemistry.

This could provide a nice bit of press to the large scale HPC community, which sometimes suffers from its association with DoD and the "other, darker side" of the DOE. (Or other non-US equivalents.)


Correct me if I'm wrong, but it doesn't matter how complex the simulation is if there is simply not enough underlying data. As far as I understand, we simply do not understand neuron activity at a low enough level to make such a thing feasible.


  > As far as I understand, we simply do not understand neuron activity at
  > a low enough level to make such a thing feasible.
Henry Markram has always been adamant that there are enough details already discovered and put into the neurophysiology journals. Of course, now you have to fish that data out and into some usable format.

http://channelpedia.epfl.ch/ionchannels

http://www.neuroml.org/


Henry Markram has always been adamant that there are enough details already discovered and put into the neurophysiology journals. Of course, now you have to fish that data out and into some usable format.

I would tend to disagree. For example, as recently as ten years ago everybody knew that most of the computation was implicit in the neural connectivity of the synapses. We now know that there is significant computation within individual neurons - in the dendrites, of all things (previously thought to be pretty much passive carriers of output from other cells - just wires, basically).

(See http://www.annualreviews.org/doi/abs/10.1146/annurev.neuro.2... for example).

Go look at the neurophysiology journals - people don't seem to have problems finding new things to talk about ;-)


I think it's a big jump to say that what I said is the same as saying (as you put your interpretation of my comment) that "there are no new things to talk about". Besides, if you were to just assume that there are no details available, then you will never aggregate them. But the reality is that there is a great deal of detail in the literature, which can be used to build (even incomplete) models. You could even look at the model to help inform the toiling grad students which neurons to poke at more.

Also, thanks for the paper. Do you have others?


And another sort of "data": humans take one or two decades to develop into sensible persons; how are you supposed to educate a computer simulation? To a large extent, "garbage in, garbage out" also holds for humans. We don't need massive simulation, we need understanding of fundamentals.


That is part of what we are trying to find out. We know a lot, but it isn't organized. Let's put together all that we have, and see what is still missing.


This is truly exciting - even if the project does not achieve the final goal of 'simulating' the human brain, the discoveries along the way will be invaluable.


I have a feeling that this is like an attempt to simulate an SoC computer at the quantum-physics level, based on a blurry microscope image of it, in the hope of playing Minesweeper on it. The basic principles (neuron behaviour) are hellishly complex and we must make a lot of dangerous simplifications; we know nothing about the firmware (pre-wiring given by the evolution of our species); and finally, the software will be completely missing (in humans it is created by a few years of supervised training and self-improvement).

Why not try from the top?


> we know nothing about the firmware (pre-wiring given by ...

Huh? That's what they are studying.


This would require dissecting newborns, so I doubt it. Anyway, my point was that it is better to approach strong AI by improving and connecting silicon-based ML methods than to put a huge amount of power and effort into an imperfect neural abstraction, hoping that it will automatically become conscious.


> imperfect neural abstraction, hoping that it will automatically become conscious.

Consciousness is not the goal. What if your idea of consciousness is wrong? Even Wikipedia admits that nobody knows what consciousness is.


Ok, bad wording. s/become conscious/acquire some higher functions of the human brain/g


I get it, it's interesting to understand our intelligence. But model a human brain? Why? It's pretty cheap to make them (and fun). They don't work all that well in the end - they're all bound up with food and sex and fear.

Why not be even a little ambitious, and make something an order of magnitude smarter? The human brain's size is arbitrarily bound by the size of the female pelvis. Presumably the artificial one won't have that limit.

And separate it from concerns of survival, paranoia, love, etc. Maybe it could think straight, and solve a few problems for us.


I get it, it's interesting to understand our intelligence. But model a human brain? Why? It's pretty cheap to make them (and fun). They don't work all that well in the end - they're all bound up with food and sex and fear.

Because we still don't understand the human brain in many, many, many ways. Building a model of it and comparing that model to reality is going to help us better understand the human brain. That, in turn, will likely help us understand things like dementia, mental illness, the effects of drugs, etc.

That's one reason.

There are others ;-)


It's really, really complicated, too. Much of it is incidental to our evolutionary path. Simulating that is like, well, simulating all the cruft stuck to a piece of toast once you've dropped it on the floor. Figure out anything at all about a human brain - thinking, for instance - and you can skip the cruft and still have a monumental achievement.

In fact, the temptation to do that will be enormous. That's why the whole project seems fishy to me. Obviously simulating a subset is easier than the whole; any subset is pretty much a huge win; so why are they talking about simulating it all? Sounds pie-in-the-sky.


The size of the brain is also bound by the huge amount of calories it consumes, and by how delicate it is. 5 minutes without oxygen? Boom, the organ you spent 20 years grooming is gone. Have the munchies? With your brain burning up to 30% of your calories, it may be the culprit.


A Terminator Skynet scenario is currently playing out in my head. I wonder how long after this human brain simulation runs it will take for it to become self-aware and start attacking us? In all seriousness, if this works it could potentially unlock so many mysteries and help solve and understand a plethora of medical issues like mental illness. We are living in extraordinary times.


I wish this meme would die. It really just taints everything we are trying to do with Strong AI.


Do you have any proof that Strong AI would be safe for humans?


The default outcome for AGI is likely bad (http://singularity.org/files/ComplexValues.pdf and http://singularity.org/files/SaME.pdf), but not in the ridiculous way that movies portray it, and no one should be using the movies as a substitute for thought (http://lesswrong.com/lw/k9/the_logical_fallacy_of_generaliza...). There are so many types of AI minds, and so many ways for it to go wrong (http://wiki.lesswrong.com/wiki/Paperclip_maximizer) that picking out a very particular fictional one to stress over, make movies about, and attempt to constrain (even framing it as a constraint problem of some more "primitive" desire to destroy humanity) as the only default mind, is pretty silly.


Not sure why this is being downvoted. Provably friendly strong AI is actually something that researchers care about, yet it is difficult to do because "friendly" has to be defined.


If a brain in a box becomes sentient, its attempts to "attack" us would probably look a lot like this: http://www.smbc-comics.com/index.php?db=comics&id=1847


Never trust a neuroscientist.

I doubt we will see an adequate simulation of a single human cell in our lifetimes, much less a brain.


Well, what do you mean by adequate? I doubt we'll see anything close to a physically accurate (i.e. quantum lattice simulation) representation of a cell, but we already have models that are good enough to guide research before investing in (often costly) trials with actual cells. Furthermore, with things like these, I'm much less interested in creating a human brain than I am in creating something conscious on a human level, if that is even possible. Consciousness need not have anything to do with accuracy to the human brain.


I'd be thrilled if we had a simulation that we trusted so much that we could consider skipping trials on actual cells.


[deleted]


Sorry for not being more clear. I know that's not what you said. I was actually answering your question of what I'd consider adequate.


Nah, my bad, I forgot what I had written.


If free will exists, how will it enter into the simulation (or emulation)?


Great question, and one which I would like to try and answer at length, but cannot spare the time...

Short answer: There are many definitions of "free will", and many arguments about the subtleties of the meanings of these definitions. AFAICT, a good argument can be made that making use of some random phenomenon in reaching decisions implements "free will" for the vast majority of these definitions (note that "random" does not need to mean "arbitrarily random"). This answer is not deeply satisfying to me, but a full elaboration of the issues would need several pages of text.



"...an international group of researchers has secured $1.6 billion..."

An international group of researchers has scammed $1.6 billion. And that's EU taxpayers' money.

Do you really think that anyone involved in deciding to allow that kind of spending has any idea as to where we're at nowadays regarding AI? Did any of them read "On Intelligence" (its author knowing more than a thing or two about AI)?

I'm sure not. And I'm not happy my tax money is funding this.

I'm all for research and funding going to research.

But this one is going to be a gigantic waste, not leading to anything. And in ten years people will apologize and explain why "x is not AI", "y is not AI", and why it was a gigantic waste.

On a positive side note, $1.6bn for the duration of this project is peanuts compared to the yearly $140bn the EU is spending ; )


There's more information about the project here: http://www.humanbrainproject.eu/ http://www.humanbrainproject.eu/files/HBP_flagship.pdf

The project is not actually as absurd as it sounds -- the funding is over 10 years, and it's funding a large number of different research programs in molecular neuroscience (8%), cognitive neuroscience (12%), theoretical neuroscience (6%), neuroinformatics (7%), medical informatics (6%), brain simulation (10%), HPC (18%), neuromorphic computing (14%), robotics (11%), and society/ethics (2%). About half of the budget is going to personnel / students, and the research is being done by a large consortium of established PIs.

Basically, they're creating a "European Institute of Neuroscience". IMHO, the way it's been branded as a giant "brain simulation" makes it look a little silly to other scientists -- but on the other hand it seems to have worked pretty well with the politicians.


It's beyond exciting that over a whole percent of the money is going to an ethics line.


Well EU needs to keep humanities grads employed somehow.

But seriously, this project has all the hallmarks of becoming the next Nanotech. As in overly broad, loosely defined, overfunded and with little practical output for the money.


This is such a weird comment that I wonder if you were being serious or even read the article.

Are you seriously responding to an article about a research consortium (using a Nobel Prize-winning neuroscientist as an outreach person) by suggesting that they haven't read a book by a tech magnate and a science writer? I mean, nobody would suggest that Jeff Hawkins is a slouch, but get real. These people are not bumpkins tilting at research-money windmills because they haven't seen the light in the best-seller aisle.


You're sure that every member of a worldwide collection of top neuroscientists has no idea about where we're at nowadays regarding AI? Really?


Of course he's sure. He's a hacker. A maker, man. If we were that close to simulating a human brain there would be implementations up on github already, waiting for a $1.6B supercomputer to become available to run it.


> If we were that close to simulating a human brain there would be implementations up on github already

IIRC, the Blue Brain Project uses NEURON.

    hg clone http://www.neuron.yale.edu/hg/neuron/nrn


Indeed it does. If you want to see open source models of the brain, check out: http://senselab.med.yale.edu/modeldb/ListByModelName.asp?c=1...


Being outside of academia, I am not privy to the historical reasons that modeldb is the way it is. Something about it has always bothered me... why on earth are all of these models in different languages? I mean, these aren't exactly CPAN modules or Metasploit modules. How are these supposed to be combined reliably? And what about unit tests? What is going on here?

Edit: oh, man :( http://rudylab.wustl.edu/research/cell/methodology/cellmodel...

http://senselab.med.yale.edu/modeldb/ShowModel.asp?model=642...

https://github.com/OpenSourceBrain/Thalamocortical/blob/mast...

last two lines are "sleep(5)" and "exit()" ... that's not how you do python modules :(


Expanding on neuroguy's comment: there are many different people working on making models, and pretty much (<simplification>) the only things that matter here are the collections of transfer functions; given inputs, how do outputs propagate? You can see that in the C++ (well, C from the looks of it) example you gave: there's a time value, a timestep, a bunch of physical attributes, and a series of functions. No matter what language or system is used, all the other models there have similar features.

When people look at the work of others, they are less interested in the modelling system used and more interested in the model, which most are happy to translate to whatever system they are using, as the very act of crawling through and translating from one form to another forces a certain kind of deeper look at the details.

It's on par with Watson & Crick using plasticine and paddlepop sticks for their model while others use ping pong balls and wire coat hangers ... further down the track everything gets unified but at the early stages one form of modelling is more or less as good as another.
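For readers outside the field, a leaky integrate-and-fire neuron is about the simplest example of the kind of "transfer function" model described above: a few physical attributes, a timestep, and a rule propagating input to output. This is a sketch, not any particular group's model, and the parameter values are illustrative defaults, not fitted to a real cell:

```python
# A minimal leaky integrate-and-fire neuron. All units are the
# conventional ones (ms, mV, nA, megaohm); all values illustrative.

def simulate_lif(input_current, dt=0.1, tau_m=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Return spike times (ms) for a constant input current (nA)."""
    v = v_rest
    t = 0.0
    spikes = []
    for _ in range(int(200 / dt)):          # simulate 200 ms
        # Membrane leaks toward rest while the input current charges it.
        dv = (-(v - v_rest) + r_m * input_current) / tau_m
        v += dv * dt
        if v >= v_thresh:                   # threshold crossed: spike
            spikes.append(round(t, 1))
            v = v_reset                     # then reset the membrane
        t += dt
    return spikes

print(len(simulate_lif(2.0)))   # stronger input -> more spikes
```

A published model would swap in far richer dynamics (Hodgkin-Huxley channels, dendritic compartments), but the input-to-output shape, and hence the translatability between modelling systems, is the same.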


It is for exactly this reason that the people of http://www.neuroml.org/ are doing their work. Good catch! The goal is to combine and place everything in one overarching language that can be used in any of the many brain simulators. There are many simulators, each with its own advantages, which is why there are so many different languages.


It would be worthwhile to get some more CS guys in the field. Agreed. Please join us :)


While I share some of your misgivings about simulating brains, you do have to remember that giving lots of money to scientists for basic research is bound to come up with something useful and may well lead to serendipitous results, regardless of hubris or wild ambitions. If only grant proposals that would very likely lead to something were accepted, there would be no basic research. The value of basic research is only cashed out much later, if at all.


Their goal is not to create an AI; it is to create a model of the human brain. If you look at their website, you will notice the project is very much a neuroscience research project, not a computer science / AI research project. The fact that the technical implementation of this project requires specially designed computers and highly optimized algorithms (which will likely merit their own computer science papers), and the fact that this project might produce benefits for computing, does not change the fact that the goal is to understand the human brain through computer modelling, not to create a strong AI.


It seems to me that building a complete model of the human brain would necessarily be identical to creating a strong AI. If they build an incomplete model of a human brain then they have nonetheless created an (incomplete) strong AI.


If they manage to construct a complete model, they would have a strong AI. However, what they are trying to do is construct a model that incorporates everything we know about the human brain. From there, they can better observe how the model compares to observed reality, and revise it accordingly. They can also test modifications to the model much more easily. If they get to the point where their model is also a strong AI, it would suggest that they have made great progress in understanding the human brain.


> identical to creating Strong AI

One of the wonderful joys of brain emulation is that you don't have to worry about "designing intelligence into it". Your goal is different; your goal is the human brain itself, without our historical baggage of abstractions like souls, minds, consciousness or intelligence. What if all of those ideas are wrong?


The Human Brain Project's aim is not to create AI - its primary aim is to create a detailed simulation of the human brain through big-data integration - in other words, through extensive integration of already published data and through integration of strategically selected new experiments. It is trying to tease out the 'rules' upon which the human brain is built. It has been shown by Henry Markram and his crew that there are in fact rules, which can be simulated, and which can explain a great deal of things that would otherwise take literally thousands and thousands of experiments - see his connectome paper for an example of how they can predict the connectivity between neurons with high accuracy: http://www.frontiersin.org/blog/The_Emergent_Connectome/66. The Human Brain Project is composed of a great many leading scientists from around the world and their institutions. Do you really think these people (some of whom are Nobel laureates) are less clued up than the author of "On Intelligence"? The most pertinent thing to get across here is that the HBP is not an AI project. It is a recreation of the human brain through biophysically accurate models.


"On a positive side note, $1.6bn for the duration of this project is peanuts compared to the yearly $140bn the EU is spending ; )"

On another positive side note, the $140B the EU is spending is peanuts compared to the ~$3-4T the US Govt. is spending.


Something tells me they are going to call it Skynet.


Still no Skynet comments?


Hopefully they don't put Windows on that computer...


I for one welcome our new supercomputer overlords.


I think this is a bad idea on so many levels. But all those aside, just think of all the other research that’s not getting funded because of these assholes.



