
IBM announces advances toward a computer that works like a human brain - fogus
http://www.siliconvalley.com/news/ci_13809715
======
davi
<http://www.modha.org/> has much more detail:

"The model reproduces a number of physiological and anatomical features of the
mammalian brain. The key functional elements of the brain, neurons, and the
connections between them, called synapses, are simulated using biologically
derived models. The neuron models include such key functional features as
input integration, spike generation and firing rate adaptation, while the
simulated synapses reproduce time and voltage dependent dynamics of four major
synaptic channel types found in cortex. Furthermore, the synapses are plastic,
meaning that the strength of connections between neurons can change according
to certain rules, which many neuroscientists believe is crucial to learning
and memory formation. ... We were able to deliver a stimulus to the model then
watch as it propagated within and between different populations of neurons. We
found that this propagation showed a spatiotemporal pattern remarkably similar
to what has been observed in experiments with real brains. In other
simulations, we also observed oscillations between active and quiet periods,
as is often observed in the brain during sleep or quiet waking."
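For the curious: the neuron features listed there (input integration, spike generation, firing-rate adaptation) correspond roughly to an adaptive leaky integrate-and-fire model. A minimal sketch, with made-up parameters rather than anything from the paper:

```python
# Adaptive leaky integrate-and-fire neuron (illustrative parameters only).

def simulate_lif(inputs, dt=1.0, tau_m=20.0, v_thresh=1.0, v_reset=0.0,
                 tau_w=100.0, b=0.2):
    """Return spike times (ms) for an input-current trace sampled every dt ms."""
    v = 0.0      # membrane potential
    w = 0.0      # spike-triggered adaptation current
    spikes = []
    for step, i_in in enumerate(inputs):
        # input integration: leaky membrane driven by input minus adaptation
        v += dt * (-v + i_in - w) / tau_m
        # adaptation decays back toward zero between spikes
        w += dt * (-w) / tau_w
        if v >= v_thresh:            # spike generation
            spikes.append(step * dt)
            v = v_reset
            w += b                   # each spike strengthens adaptation,
                                     # so the firing rate adapts downward
    return spikes

# Constant drive: inter-spike intervals lengthen as adaptation builds up.
times = simulate_lif([1.5] * 2000)
```

With constant input, the intervals between spikes grow as the adaptation variable accumulates, which is the firing-rate adaptation the quote describes.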

I'm pretty skeptical about this line of work. What's interesting about neural
circuitry is not so much large-scale population activity patterns as the
transformation of information (e.g. construction of a receptive field, place
cells in the hippocampus). As far as I can tell, this work captures some of
the large-scale oscillatory dynamics but says nothing about the fine-grained
information processing, which is where the rubber actually meets the road.

----

Edit: that said, the tools for large scale computation being developed here
could be an interesting foundation for further work -- given the right stimuli
and learning rules, can one recapitulate formation of receptive fields? If
yes, what would the relationship be between the simulation's implementation of
receptive field structure, and the implementations found in biology?

But the media hype is pretty off scale for what is, essentially, preliminary
work.

~~~
clevercode
I heartily agree with your skepticism.

In my opinion, rather than attempting to simulate progressively larger brains,
we should instead focus on really nailing the details of smaller ones, perhaps
starting with insects.

As an example, Portia jumping spiders display very complex hunting behaviors,
including trial-and-error learning in novel circumstances. Even with tiny
eyes, they have the visual acuity of a cat. However, unlike a cat, they are
only able to see a small portion of the visual field at a time, and must
therefore scan the environment, keeping the rest of what has been scanned in
memory. Portia spiders are known to spend up to an hour "analyzing" their
environment before deciding on a course of action! Their sharp eyesight allows
them to recognize the species of the spider they are currently stalking. One
common tactic is to pluck deceptively at the web of their victim in order to
manipulate it into reacting and moving into a more favorable position before
the attack. Different species of spider respond in different ways, and so
Portia learns to apply different patterns of plucking to fit the
circumstances. When encountering an unknown type of spider, Portia will try
various experimental plucking motions, and the successful ones are remembered
and reused correctly in future attacks.

Portia somehow manages to produce this amazingly rich and adaptive behavior
with a mere 500k neurons, about 2,000 times fewer than the number used by
Blue Gene in the experiment.

------
JunkDNA
I'm always suspicious of anything that people claim "works like the human X".
In many cases (the brain especially) we have only the most rudimentary
understanding of how things actually work. Our ability to create something
that works the same way is therefore only as good as that rudimentary
understanding. That said, it's clear they are trying to use this computer to
understand more about brains. But whenever I read an article like this, I
can't help but think of all those grainy movies of early failed attempts at
manned flight by people trying to imitate birds.

~~~
wglb
I think you have nailed it. I don't think we actually have a clue as to how
the brain works. A while ago we thought it was like a telephone exchange;
then we thought it was like a computer; now we think it is like the cloud.
The true advances in interesting computer results come not from modeling the
brain, but more from low-grade mimicking of the evolutionary process.

~~~
a-priori
_A while ago we thought it was like a telephone exchange; then we thought it
was like a computer; now we think it is like the cloud._

There's a big difference between a metaphor and a theory. No one actually
thinks the brain works like a computer. It's just a useful metaphor for
working memory and other psychological phenomena. Quite far removed from the
neuroscience underlying it.

 _The true advances in interesting computer results come not from modeling
the brain, but more from low-grade mimicking of the evolutionary process._

What does this even mean?

~~~
wglb
Biologically Inspired Algorithms, in particular Genetic Algorithms and
Genetic Programming.

~~~
a-priori
I would argue that brain simulations are the ultimate "biologically-inspired
algorithm".

------
SlyShy
I find this a fascinating area of research; however, a prediction of my dad's
always comes to mind: "A computer as intelligent as a human will take 18
years to 'program.'"

~~~
mixmax
Not if it's a computer doing the programming.

~~~
anonjon
If it is programming, is it still a computer, or is it a programmer?

A computer with a human-like brain seems to be a bit of an oxymoron in that
respect.

------
nwatson
quote: "Modha imagines a cognitive computer that could analyze a flood of
constantly updated data from trading floors, banking institutions and even
real estate markets around the world — sorting through the noise to identify
key trends and their consequences."

... and at the end we'll have a pseudo-human opinion that will also totally
fail to see the housing bubble. It will also have opinions about the
existence of God and Ginger vs. Mary Ann, and will cry because its haircut
turned out poorly.

~~~
jimbokun
That quote along with

"A cognitive computer might also help soldiers analyze and react to chaotic
events on a battlefield."

seems to be there for the benefit of people controlling the purse strings of
the grant money. DARPA (or IARPA now) and potential backers from the financial
world (although, do finance folks actually finance this kind of research?).

It may actually work at predicting financial trends, for a while. Then some
trading floor will use that information to trade, then their competitors will
too, and then these machines will totally fail to see some other consequence
of the complex trades they are making with each other.

------
randallsquared
There's a lower limit for how much processing power it will take to run a
human-level intelligence. The highest that lower limit can be is at the level
where we can simulate each neuron in a human brain; at that point, we can
build an AI merely by simulation. It seems likely, though, that by the time
we are able to simulate an entire human brain, we'll have long since passed
the actual lower limit for human-level intelligence, at least if we knew
exactly how to build it. Assuming we survive, it'll be interesting to find
out, once we understand enough to work it out, when we passed that lower
limit with supercomputers.

~~~
a-priori
Right now, it's possible to simulate one biologically accurate neuron at the
compartment level (where you model a neuron by dividing it into many
connected compartments) on one CPU core in real time.
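To make "compartment-level" concrete, here's a minimal sketch of a passive multi-compartment cable, with purely illustrative constants; real compartment models add active ion-channel (e.g. Hodgkin-Huxley) currents to each compartment:

```python
# Passive multi-compartment "cable" neuron (illustrative constants only).

def step_compartments(v, i_ext, dt=0.025, c_m=1.0, g_leak=0.1, g_axial=0.5):
    """Advance every compartment's voltage by one forward-Euler time step."""
    n = len(v)
    v_next = list(v)
    for i in range(n):
        i_leak = -g_leak * v[i]        # leak toward rest (0 mV here)
        i_axial = 0.0                  # current exchanged with neighbors
        if i > 0:
            i_axial += g_axial * (v[i - 1] - v[i])
        if i < n - 1:
            i_axial += g_axial * (v[i + 1] - v[i])
        v_next[i] = v[i] + dt * (i_leak + i_axial + i_ext[i]) / c_m
    return v_next

# Inject current into compartment 0 and let it spread down the cable.
v = [0.0] * 10
stim = [2.0] + [0.0] * 9
for _ in range(4000):       # 100 ms at dt = 0.025 ms
    v = step_compartments(v, stim)
```

Current injected at one end spreads to neighboring compartments through the axial conductance and leaks out through the membrane, so the voltage decays along the cable.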

Here's the kicker: it's also been shown that these simulations can scale
linearly with the number of processors (given sufficient interconnect
bandwidth). So with _N_ cores, you can simulate _kN_ neurons for some constant
_k_ (and I think that _k=1_ ).

This implies that the limit is the number of CPU cores you throw at the
problem. The human brain has around 100 billion neurons so, in theory, if you
had a 100-billion core supercomputer, you could simulate the whole thing.
That's a lot, but my point is that scaling a simulation is an engineering
problem.

This doesn't say anything about the accuracy of the model; that depends on
accurate characterisations of each part of the neuron, and of how they're
arranged and connected. But the key point here is that these are low-level
characteristics that can be individually examined, making whole-brain
simulations a tractable problem.

Source: Djurfeldt et al., "Brain-scale simulation of the neocortex on the IBM
Blue Gene/L supercomputer," IBM Journal of Research and Development (2008)

------
ballpark
Man is working so hard to understand and recreate the human body. And yet
many believe that the human body was created by chance and chaos.

~~~
gloob
I believe that hills were created by chaos and chance. Doesn't mean we're
incapable of constructing ramps.

~~~
ballpark
I think it's great that we're learning about and exploring the human body,
and trying to mimic it in other areas. I'm really just attempting to
challenge those beliefs.

*edit - wording

