

IBM building Blue Brain (full brain neuron simulator) - andr
http://www.seedmagazine.com/news/2008/03/out_of_the_blue.php?page=all&p=y

======
pchristensen
This is amazing - a complete simulation of a section of cortical column. And
by complete, it's 400 subsimulations _per neuron_. Outstanding! Thanks IBM!

~~~
mixmax
This is one of the advantages of big companies: they can afford to do
expensive stuff even though there is no immediate guaranteed return.

~~~
pchristensen
Don't thank IBM, thank their clients that pay too much for consulting.

------
s3graham
Awesome (in the archaic sense) that the size estimate is only 200x what Google
has currently. Seems trivially within reach in, what, only 8 doublings of
performance? They guess at 10 years for "one machine", so what, 15-20 years
before it's PDA-sized/priced? An extra
full-real-thinking-feeling-creating-art-type brain in my laptop?! Holy shit.
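
A quick sanity check on that "8 doublings" figure, assuming the ~200x gap
above and one doubling every ~18 months (the doubling period is my own
assumption, not from the article):

```python
# Smallest number of doublings n with 2**n >= 200, and the rough
# calendar time that implies at an assumed 18-month doubling period.
import math

gap = 200                                # estimated compute gap vs. Google today
doublings = math.ceil(math.log2(gap))    # smallest n with 2**n >= gap
years = doublings * 1.5                  # at one doubling per 18 months
print(doublings, years)                  # 8 doublings, 12.0 years
```

So 8 doublings covers the 200x gap with room to spare (2**8 = 256), and the
"10 years for one machine" guess is in the same ballpark.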

I was actually just wondering why this project wasn't happening yet while
reading "The Singularity is Near". If you haven't read it, I recommend it.
It's hella dry and boring in parts, but overall worth reading if only to make
you think about what a startup might look like in only 5 or 10 years.

~~~
xirium
The central argument from the book The Singularity Is Near by Ray Kurzweil is
on his website ( <http://www.kurzweilai.net/articles/art0134.html?printable=1>
). It was almost completely ignored when it was posted to this forum a few
days ago ( <http://news.ycombinator.com/item?id=126183> ).

~~~
marvin
I think the reason most people are ignoring Kurzweil is that he sounds like a
crackpot. I happen to be a guy who owns an autographed copy of Kurzweil's
latest book, but I acknowledge that the idea of the technological singularity
sounds too good to be true. There have been thousands of cults proclaiming
that the end is near; it would be a coincidence of historical proportions if
this one happened to be right.

Kurzweil presents many convincing arguments, but the central tenet is that the
doubling of processing power keeps up for another two decades. Due to the
nature of nature, what we observe as exponential growth invariably turns out,
in the end, to be logistic growth. If so, the doubling of processing power
could slow at any time, and once it does we would get at most one more
doubling. Such an event would bring the dream of a near-term universe full of
computers to a screeching halt.
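
To illustrate the point with made-up numbers (the ceiling K and growth rate r
below are purely hypothetical): a logistic curve tracks an exponential almost
perfectly at first, then saturates, which is exactly why extrapolating the
early part is risky.

```python
# Compare logistic growth against a pure exponential with the same rate.
import math

K, r = 1_000_000.0, 0.7   # hypothetical carrying capacity and growth rate

def logistic(t, x0=1.0):
    # Standard logistic solution starting from x0 at t = 0.
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

for t in (0, 5, 10, 20, 30):
    print(t, round(logistic(t)), round(math.exp(r * t)))
# Early on the two columns track each other closely; by t = 30 the
# exponential is over a thousand times the logistic value, which has
# flattened out near K.
```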

I believe that we should be able to create machines that think, but it is
important that we aren't blinded by ideology. A massive "singularity movement"
that ends up disappointing people will, at best, delay the advent of
such technology due to funding issues. I think the history of AI has
demonstrated this quite clearly.

If Kurzweil's estimates are correct, though, another 8-10 years of
"exponential growth" in computer hardware will put full-scale simulation of a
human brain into supercomputer territory. And frankly, I doubt that evolution
has managed to find the most computationally efficient way to achieve
intelligence. There's got to be a way to do this.

~~~
anewaccountname
The reason "chips" are called "chips" is that they are essentially 2D. As soon
as we bump up to 3D (though for cooling reasons it will probably be more like
a high-surface-area crumpled-up shape, not unlike the brain), I expect at
least a few more doublings.

------
bayareaguy
_The human brain requires about 25 watts of electricity to operate. Simulating
the brain on a supercomputer with existing microchips would generate an annual
electrical bill of about $3 billion. If computing speeds continue to develop
at their current exponential pace, and energy efficiency improves, Markram
believes that he'll be able to model a complete human brain on a single
machine in ten years or less._

I wonder how many megawatts Blue Brain will need to do something as simple as
simulating only 1e5 cells.

According to wikipedia,

\- the average power consumption of a human cell is around 1e-12 watts

\- 60 years ago ENIAC needed 150 kW to do what you could probably do today in
a few µW

Based on those numbers, I think Markram is way too optimistic about simulating
the brain on a single machine in 10 years. If things continue as they have,
it's more likely to take closer to 50 years.

I'll pay more attention to Kurzweil's singularity when the difference in power
consumption between a tiny fraction of a simulated brain and a real one drops
below ten orders of magnitude.
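
For the curious, the arithmetic behind the article's $3 billion figure,
spelled out. The 25 W brain and the $3B annual bill come from the article;
the $0.10/kWh electricity price is my assumption, so the result is only good
to about an order of magnitude:

```python
# Back out the implied supercomputer power draw from the annual bill,
# then compare it to the brain's 25 W in orders of magnitude.
import math

BRAIN_WATTS = 25.0
ANNUAL_BILL_USD = 3e9
PRICE_PER_KWH = 0.10            # assumed average electricity rate
HOURS_PER_YEAR = 24 * 365

sim_kw = ANNUAL_BILL_USD / PRICE_PER_KWH / HOURS_PER_YEAR   # ~3.4e6 kW
gap = math.log10(sim_kw * 1000 / BRAIN_WATTS)
print(round(sim_kw), round(gap, 1))   # roughly 8 orders of magnitude
```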

------
truck
I wonder how they plan to do parameter estimation? Millions of nonlinear
dynamic models running, each with at least 3 free parameters in the simplest
of neuron models, with an almost infinite number of possible topologies of the
networks. That is quite the search space and neuroscience can provide very
little prior knowledge.
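
To put a number on "quite the search space," here is a sketch for a toy
network far smaller than the simulated column (all figures illustrative, not
from the article): the count of possible topologies alone dwarfs even the
continuous parameter space.

```python
# Size up the search space: continuous parameter dimensions vs. the
# number of possible directed network topologies (2 ** possible_edges).
from math import log10

n_neurons = 10_000
free_params = 3 * n_neurons                   # 3 free params per neuron
possible_edges = n_neurons * (n_neurons - 1)  # directed, no self-loops
log10_topologies = possible_edges * log10(2)  # log10 of 2**possible_edges
print(free_params, round(log10_topologies))
# ~30,000 continuous dimensions vs. on the order of 10**(3e7) topologies.
```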

------
michaelneale
I watch with great interest. Is this how general AI will happen? Or is it more
like heavier-than-air flight was: we will get there, we just need to
understand the fundamentals of what makes AI work.

~~~
nsrivast
I doubt it. Our modeling capabilities continue to develop much faster than our
knowledge of how the brain works.

Of course, neural network modeling can still be of great use. Since we've more
or less figured out how the brain does a variety of low-level processing tasks
(like coordinating muscle movements, finding the visual depth and texture of
observed surfaces, etc.), this knowledge can serve as a starting point for
developing exciting technologies at increasingly faster speeds (robotic
limbs, computer-guided navigation, etc.). Also, advanced neural networks can
serve as rudimentary "existence proofs" for theories of mind and
consciousness. For example, a theory that claims that our incredible computing
power stems from a recurrent network of thalamo-cortical neurons is
strengthened by a computer simulation that exhibits similar large-scale
behavior in similar time scales under suitable parameters.

That last sentence is pretty qualified, for good reason. We shouldn't expect
to write some code and discover a working brain, at least not until we figure
out how it works to begin with.

~~~
greendestiny
I think it's entirely possible we'll develop general AI without understanding
consciousness or even some of the other functions of the brain.

~~~
nsrivast
Why?

~~~
Electro
Any simulation running neurone interactions has the ability to produce
sentience without us knowing why or how it happened. However, it would require
a phenomenally larger amount of processing power than the simulated 'brain'
itself has. Human neurones can have up to 10,000 synapses connecting them
together (on average 7,000 per neurone), and each synaptic interaction is a
multi-variable mix of chemical reactions, catalysts, reinforcers, inhibitors
and stimulants.

I think our aim should be to simulate the R-Complex part of the triune brain
model. This is where our evolutionary intelligence came from and it is the
best basis to start. Birds have managed high levels of intelligence without
the neocortex present in mammals, so obviously there's more than one route to
sentience.

I believe brain density is the key to sentience. Small animals, like birds,
lose heat quickly and efficiently, which means their 'processors' can pump out
more heat without burning out. Whereas a human brain would cook itself if it
was 'overclocked', as we're at the upper limit of the
communication-efficiency ratio, and elephants are above that ratio but get
away with large processing and data storage with a form of 'low-energy'
brains.

With the speed of electricity in synapses, a crow might be able to achieve 10
times as many messages in the time it takes one of our brains to make one
message, while elephants might only make 0.5 messages a second per synapse. So
a crow would benefit greatly from increasing brain density and be able to
solve problems to the same degree as a human infant; however, humans have
phenomenal storage in our brains, and a crow simply doesn't have the brain
mass to compete, so we have more data to act on than a crow does. Elephants
use their brains to remember pretty much everything; they've even been seen
taking shortcuts that they have 'guessed' between paths they've taken before.

I think the brain is astounding, but to reproduce sentience we should follow
evolution; not only because it gives us vast insights into how our brains
evolved, but also because it should keep us from ever getting into the
irrational-AI situation. If an AI is made by simulating a human brain, then it
is human, and it would be capable of understanding that, since it would have
to be 'taught'. I know I'm human because of the society around me, which
teaches me that I think and act in a similar way to everyone else, just with
different data.

Well, that should teach you for asking why.

~~~
michaelneale
The future will not be dull, we can at least be sure of that ;)

------
SuperThread
Why am I the only person who's terrified rather than excited by every new
incremental progression of AI? Do you guys actually look forward to the day
when humans are made obsolete?

PS: This scientist seems to be exponentiating incorrectly. In ten years he'll
have 32-ish times as much processing power; he needs 10 million times as much
to get to the level of the entire brain.
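
That arithmetic, spelled out (assuming one doubling every two years, which is
what the 32x figure implies; the 10-million-x gap is the parent figure, not
mine):

```python
# At one doubling per two years: speedup after ten years, and how many
# doublings (and years) a 1e7x gap would actually require.
import math

doublings_in_10y = 10 // 2               # 5 doublings in ten years
speedup = 2 ** doublings_in_10y          # 32x
gap = 1e7                                # claimed shortfall vs. a full brain
doublings_needed = math.ceil(math.log2(gap))
print(speedup, doublings_needed, 2 * doublings_needed)
# 32x in ten years, vs. 24 doublings (~48 years) to cover the 1e7x gap.
```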

~~~
dusklight
Anyone watching the new TV show "Terminator: Sarah Connor Chronicles"?

It seems to me that whether or not to be scared about the progress of AI
depends on a very simple question.

Should we treat AI like our slaves, or should we treat AI like our children?

~~~
anupamkapoor
> Should we treat AI like our slaves, or should we treat AI like our children?

A more likely scenario would probably be how _we_ would be treated by _them_.

------
moog
'Markram believes that he'll be able to model a complete human brain on a
single machine in ten years or less.'

Does this mean coding is in its last days? In future, will we just chat to
machines instead of programming them?

------
andr
Things like that make me think it's time to make Asimov's Three Laws* a legal
requirement.

* <http://en.wikipedia.org/wiki/Three_Laws_of_Robotics>

~~~
jey
Asimov's Three Laws are a plot device, not an actual workable system for
machine ethics. This is a very very important area for further research.

~~~
andr
They are a very good foundation for a workable system for machine ethics.

~~~
newt0311
Only if you agree with Kant's philosophy and not with the many other
interesting ones out there.

~~~
rglullis
Ooh... it is this type of topic that makes me still come to HN. I am somewhat
familiar with Kant, and Asimov's Laws seem like a good basis for AI.

But I would love to have some links to the "other interesting ones out there."
Would you please elaborate a little more? Any links?

~~~
jey
<http://news.ycombinator.com/item?id=129148>

------
DanielBMarkham
The day one of these simulators does something unique and interesting that
can't be done with other hardware, I think we will have turned a very
important corner.

