
Simulating 1 second of real brain activity takes 40 minutes and 83K processors - gmatty
http://gigaom.com/2013/08/02/simulating-1-second-of-real-brain-activity-takes-40-minutes-83k-processors/
======
Xcelerate
The title statement is almost meaningless. It would be better just to say
"simulating reality takes a LONG time".

My research is in molecular dynamics. I simulate systems up to 1 billion atoms
on Kraken & Titan. HUGE approximations are made in simulating these systems,
but depending on what exactly it is you're studying, these simulations still
provide useful results. That is the key to all these studies: how well does
your approximation reproduce whatever it is that you're attempting to model?
In some cases, very well. For instance, I'm not going to get exact energy
levels of a large system, but the system will qualitatively evolve in the same
fashion that the experimental system does (which then guides the experimental
counterpart to the research). I don't know the details of this brain
simulation, but there is certainly some aspect of it that is not being
reproduced anywhere close to real-life, and hopefully this isn't what they're
interested in (and I'm sure they know that, but I don't think the article
author does).

The very best simulations of reality that we can perform handle at most just a
few H/He/C atoms. And an issue called the fermion-sign problem means that the
computational power necessary to simulate larger systems scales exponentially
with the number of particles. Unfortunately what that means is -- short of
developing quantum computers (which are polynomial order for the sign problem)
-- we aren't ever going to simulate more than a few atoms with near-perfect
accuracy, and certainly nothing like a human brain.
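
A toy way to see why that scaling is fatal (the numbers here are illustrative, not from any real quantum-chemistry code): the state space of N two-level quantum particles has 2^N dimensions, so exact classical simulation blows up exponentially, while the hoped-for quantum-computer cost grows only polynomially (shown as N^3 purely for comparison):

```python
# Toy comparison of exponential vs. polynomial scaling in particle count N.
# 2**N is the exact state-space dimension for N two-level particles;
# N**3 is an illustrative stand-in for polynomial quantum-computer scaling.
for n in (10, 20, 40, 80):
    exact = 2 ** n
    poly = n ** 3
    print(f"N={n:3d}  classical ~{exact:.2e}  quantum ~{poly:.2e}")
```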

EDIT: Didn't mean for my comment to sound so negative. Obviously, the
researchers know exactly what they're doing. I was just trying to dispel the
impression that we're close to simulating the human brain.

~~~
dekhn
Generally, the people doing brain simulations like to claim that they can use
a massively reduced representation (little more than a graph model that pushes
signals around with some constants) and yet, somehow, the simulation is
inherently capable of reproducing something complex about thought. I'm not
sure how they get to this conclusion, but it's a massive reductionism. It
would be nice if it were true: it would mean that for a reasonable
expenditure, we could build a machine that, basically, "thought" and had a
"mind" and we could convince ourselves that it was "living" much like we think
of other humans.

To date nobody has come up with a really compelling disproof that they are
more or less correct: thought and mind seem to be best explained as "emergent
behavior of a complex system".

Now, whether we can bootstrap a thinking mind by running complex simulations
on computers remains to be seen. I'd like to believe there is some interesting
physics going on in brains we can't simulate with straightforward physical
models, but there really isn't any good evidence for that.

~~~
JamesArgo
When did reductionism become a dirty word?

~~~
sgentle
I've noticed this too. I blame a combination of _Zen and the Art of Motorcycle
Maintenance_ and people not knowing what the word actually means.

From my observation, it appears people assume "reductionist" means
"simplistic" as opposed to "representing a complex system as no more than the
sum of its components". To be honest, I think it's just an etymological
problem: "reduction" sounds bad, so "reductionism" sounds bad.

The grandparent here follows this pattern. Using the actual meaning, it would
be paradoxical for an approximation to be reductionist. In fact, it's quite
the opposite. A reductionist would hold that the most accurate high-level
model of a brain would be the most low-level model of a brain. If we could
model each atom precisely then we would, necessarily, model the whole brain
precisely.

The whole thing is pretty amusing when you take into account that anyone who
manipulates software or systems in any serious way needs to engage in
reductionism to be able to work. It's not as if "first, we write module A,
then module B, then a system magically appears from the ether" is a viable
architecture. At least, not since we stopped taking neural nets seriously.

~~~
bbgm
Reductionism is fundamental to science. There are places where it shows its
limits, but I've always equated being reductionist with being clever, with
taking a complex question and re-casting it in terms that can be
measured/studied using the current state of the art.

------
foobarbazqux
To put this claim in perspective, we are still unable to simulate a single
cell by modeling everything we know from biochemistry, even if we ignore
molecular dynamics. Mostly this is because we don't know what many genes and
the proteins they encode do.

This matters, for example, if we want to simulate neuronal plasticity or the
effect of antidepressants. We're an incredibly long way from a full
simulation, which would also require integration with the rest of the nervous
system.

~~~
deletes
To correctly model a cell you would need to simulate every atom, and there are
about 10^11 to 10^14 (!) of them in a single cell.

I think an interesting number would be how many atoms it takes to simulate
one atom reasonably well. Then you could get correct results for some (really)
small biological functions.

~~~
foobarbazqux
Usually people modeling cells don't consider molecular dynamics, they consider
concentrations of proteins, ions, and metabolites. A full MD simulation would
obviously be superior, but useful predictions could be made without it.

It's hard to say which is the better approach. On the one hand you have MD
from which you could technically build an accurate model "from scratch",
folding all of your own proteins as part of the simulation, but it is
exceedingly expensive to do that. On the other hand, you have a model built at
the level of biochemical reactions and molecular biology, which is
computationally feasible, but then you don't have enough information from
bench work to flesh it out.

~~~
dekhn
I used to do explicit MD with waters, etc, the whole shebang.

Massive waste of cycles. Never gonna make a useful prediction.

Reduced representations that contain the minimum of complexity to express the
phenomena of interest are far more useful.

------
beloch
If Moore's law were to continue to hold, this would mean the same simulation
could operate in real-time on an 83K processor cluster in approximately 25
years or on one processor in about 50 years. Of course, this simulation covers
only 1% of the neurons that a human brain has, and I don't know if scaling up
to a full brain would be linear (this probably depends on the average length of
axons in the brain). This completely unrealistic estimate also assumes no
improvements in algorithms, etc.
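
A back-of-envelope sketch of that arithmetic (assuming a 2-year doubling period; pick 18 months instead and the years shrink accordingly, which is why estimates like the one above vary a bit):

```python
import math

SLOWDOWN = 40 * 60       # 40 minutes of wall time per simulated second = 2400x too slow
CLUSTER = 83_000         # processors used in the reported run
DOUBLING_YEARS = 2       # assumed Moore's-law doubling period

# Doublings needed, times years per doubling, for the same cluster to hit real time:
years_cluster = math.log2(SLOWDOWN) * DOUBLING_YEARS
# Same, but for a single processor to match today's entire cluster in real time:
years_single = math.log2(SLOWDOWN * CLUSTER) * DOUBLING_YEARS

print(f"cluster real-time in ~{years_cluster:.0f} years")    # ~22
print(f"single CPU real-time in ~{years_single:.0f} years")  # ~55
```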

Perhaps a more interesting question is whether or not the brain can really be
simulated by a classical computer at all. If quantum mechanics plays a role in
the function of the brain we will need a quantum computer to do the job. A
classical simulation of a quantum brain would be a very odd beast indeed! It
might exhibit behavior that seems like intelligence but somehow falls short.

~~~
luikore
It's Penrose's view that the brain is quantum, but this paper
[http://arxiv.org/pdf/quant-ph/9907009v2.pdf](http://arxiv.org/pdf/quant-ph/9907009v2.pdf)
shows the human brain is more classical than quantum.

Maybe the simulation algorithm is the thing that needs improvement. Some day
someone may find a way to reduce the complexity by an order of magnitude, and
then the era of the human brain is over.

~~~
Retric
~10 billion nerves * 10,000 connections per nerve * 100 Hz * ~100
neurotransmitters = a complex beast to simulate even if you could get away
with 1 calculation per connection per transmitter and ignore things like
propagation delays and fatigue.
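
The multiplication above, spelled out (every input is the comment's rough order-of-magnitude guess, not a measured value):

```python
# Back-of-envelope throughput for a minimal brain simulation.
nerves = 10 ** 10        # ~10 billion neurons
connections = 10 ** 4    # ~10,000 synapses per neuron
rate_hz = 100            # ~100 Hz update rate
transmitters = 100       # ~100 neurotransmitter types

ops_per_second = nerves * connections * rate_hz * transmitters
print(f"{ops_per_second:.0e} calculations per second")  # 1e+18, i.e. exascale
```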

~~~
MacsHeadroom
Why calculate anything when we can make most of it happen in dedicated
hardware?
[http://www.stanford.edu/group/brainsinsilicon/about.html](http://www.stanford.edu/group/brainsinsilicon/about.html)

------
IvyMike
At what point does this become unethical (if ever)?

Before you answer, please consider how certain you are that you are not a
simulation.

Edit: I don't know the answer, and there probably is a large gray area. But I
do know this starts to make me uncomfortable the more accurate it gets. Time
to go reread Egan's Axiomatic again, I guess.

~~~
csallen
Even if you are a simulation, would you prefer not to exist? What, really, is
the difference between reality and a perfectly-executed simulation? It's hard
to condemn one without condemning the other.

~~~
ggreer
I think the GP is talking about current simulations, not future ones. Current
simulations don't have sensory organs, and I doubt the simulated neurons
accurately reflect a real brain at this stage. Most likely, early simulated
minds would be insane and in great pain, with no way to communicate.

Considering we experiment on animals to reduce human suffering (heck, we kill
and eat them just because they're tasty), running a tiny fraction of a
simulated brain for a few subjective seconds doesn't worry me much. Of course,
as computers get faster and cheaper this could become a huge issue.

It's only tangentially related, but Robin Hanson has some good talks and
papers about emulated minds. The shortest summary of his views is probably
this talk:
[http://www.youtube.com/watch?v=9qcIsjrHENU](http://www.youtube.com/watch?v=9qcIsjrHENU).
A more in-depth version of that talk is at
[http://vimeo.com/9508131](http://vimeo.com/9508131).

~~~
ars
> early simulated minds would be insane and in great pain

Insane maybe, but why in pain? Why would you add pain signals without also
adding communication?

Even currently existing lower animals (lobsters and down) don't feel pain;
why would you expect a simulation to do so?

------
bhitov
I found the article misleading. Real brain activity was not simulated.

From the press release (emphasis mine):

"The nerve cells were _randomly connected_ and the simulation itself was not
supposed to provide new insight into the brain"

------
anigbrowl
Unfortunately, that's only for a number of cells equivalent to 1% of a human
brain. A full brain's worth of simulation would take 2.5 days... although that
much is probably not necessary. It will fall soon enough, though.

~~~
pervycreeper
The thought that we are maybe two decades away from having the computing power
to simulate a human brain in realtime (and much more than that subsequently)
is astounding, even in this age of technological wonder.

~~~
cabalamat
Assuming (1) it's parallelisable, (2) this is an accurate enough simulation,
and (3) a whole human brain is 100 times bigger, we'd need 40 * 60 * 83k * 100
processors, which is about 20 billion.
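
The arithmetic, spelled out (40 * 60 is the 40-minutes-per-simulated-second slowdown converted to a raw speedup factor):

```python
# Scale the reported run (40 min of wall time per simulated second, on
# 83,000 processors, for ~1% of a brain) up to real-time and full size.
slowdown = 40 * 60           # 2400x too slow
cluster = 83_000             # processors in the reported run
brain_factor = 100           # full brain is ~100x the simulated neuron count

processors = slowdown * cluster * brain_factor
print(f"{processors:,}")     # 19,920,000,000 -- about 20 billion
```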

We could do it now (hardware-wise), if we really wanted to.

~~~
mischanix
If each processor draws down 50 Watts, 20 billion of them would draw 1 TW. The
US uses on average ~3 TW across every power source (including e.g. combustion
engines).
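
Sanity-checking that figure (the 50 W per processor is the comment's assumption):

```python
# 20 billion processors at an assumed 50 W each.
watts = 20_000_000_000 * 50
print(f"{watts / 1e12:.0f} TW")  # 1 TW, vs ~3 TW average US draw across all sources
```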

------
ibudiallo
Maybe we are simply taking the wrong approach.

How many gears from a mechanical computer does it take to simulate a micro
processor?

If we were one day to closely simulate the brain, I am sure the method would
not involve computers as we know them today.

~~~
cLeEOGPw
> How many gears from a mechanical computer does it take to simulate a micro
> processor?

Imo brains themselves are like mechanical gears compared to the potential of a
processor.

------
jjjeffrey
Does anyone know what it means to simulate brain activity? Is the brain being
emulated in any meaningful way? Or is it just modeling the physical behavior
of neurons?

~~~
marcosdumay
From the article, those were simulations of neurons, randomly connected.
The intention was to test the machine and software.

------
pothibo
Yes, we are very far away from the human brain on many levels. However, I don't
like this kind of article because it doesn't make sense on the software level.

Mathematically, we could figure out how much computing power we would need to
match the human brain (24 bytes per synapse * number of synapses, etc.).

I think I would be more interested in the specifics of their experiment.

\- What kind of software were they running?

\- Was the software bug-free (yeah, right)?

\- Was the software optimized?

\- Caching?

------
kingkawn
Recent research has shown that glial cells, the uncelebrated insulators of the
axon, in fact provide significant chemical signaling and modulation to the
process of neural activity. Only phenotypical changes have been observed; the
mechanism is not understood yet. Suffice it to say that, computational limits
aside, we still couldn't simulate a brain because we wouldn't know what to make.

------
pygy_
These attempts at brain simulation perplex me to no end.

Simulating a lump of tissue is definitely useful; it's when they start talking
about simulating a complete brain that I get lost.

A brain doesn't happen overnight. It is the result of a long developmental
process encompassing embryology, learning and experience acquisition.

How do they plan to wire the whole thing?

Micro- and macroscopic connections are well understood, it is the so-called
meso-scale (in between) that is troublesome.

There will never be a non-destructive way to extract that information from a
live brain, and I'm not sure there will ever be a way to extract it at all.
The current methods[0] allow extracting either the wiring or the gene
expression.

[0] [http://www.brain-map.org/](http://www.brain-map.org/)

~~~
hannibal5
They use it to understand the brain. So they wire it like the brain is wired.

If they simulate the cortex, they arrange it into cortical columns (each
column is between 50,000 and 100,000 neurons, and there are over two million
of them). Those columns have many different layers and neuron types.
Understanding how cortical columns and minicolumns work is very important.

~~~
pygy_
You made my point regarding small scale simulation, but completely missed the
rest.

 _> So they wire it like the brain is wired._

As if anyone knew how to do that. Even assuming that you can do both accurate
micro-tractography and extract cell-by-cell epigenetic information out of a
single column, you'd still be missing other crucial information like
individual synaptic strengths.

And I'm not even talking about cortico-thalamic connections (required to model
epilepsy), inter-columnar connections, and whatever regulation happens in the
white matter.

Relying on the probability distribution of connections as a crude
approximation is useful, but far from the real deal.

~~~
hannibal5
> but far from the real deal.

Your real deal has nothing to do with current brain science and these
simulations.

~~~
pygy_
With these simulations, definitely, that was the point.

With current brain science, call me perplexed. Could you detail where you
think I'm off base?

------
npatrick04
This has been done before with a full human-brain-scale model in 2005. It took
50 days on 27 processors for that simulation. It's always good to have more
research being produced, but this linked article is a bit short on
information.

[http://izhikevich.org/human_brain_simulation/Blue_Brain.htm](http://izhikevich.org/human_brain_simulation/Blue_Brain.htm)

------
SeanDav
I would imagine that the analogue nature of the brain is always going to
entail a lot of approximation in simulation. I am not sure you could
accurately simulate even one neuron.

Disclaimer: this is completely out of my field - just a layman speculating.

------
psadri
How many Moore's law doublings before it is real time? Seems like ~23
generations or ~35 years. Of course, if you leave out the bits constantly
thinking about sex, we might get there a lot faster :)

------
mentos
This assumes that the neuron is the level of granularity that we need to
simulate?

Is there any research on what other chemical/electrical interactions might be
necessary to simulate the brain?

~~~
omarchowdhury
Yeah, you need to simulate an entire universe as well. ;)

------
Tloewald
1 second of the brain activity _we know about_.

------
frozenport
Are these processors or processing elements (PEs)?

------
nfoz
Where's the source code?

