
The brain's 5-million core, 9 Hz computer - liuhenry
http://biophilic.blogspot.com/2011/05/neural-waves-of-brain.html
======
archgoon
"Unlike transistors, neurons are intrinsically rhythmic to various degrees due
to their ion channel complements that govern firing and refractory/recovery
times. So external "clocking" is not always needed to make them run."

Transistors don't need a clock in order to run. They can in fact be set up to
create their own clocks. The purpose of clocks is for synchronization across
the chip so that we mere mortals can modularize the operation of a CPU. That
is, clocks exist mostly so that we can think in terms of sequential gate
operations (or from the programmer point of view, assembly code).
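The "transistors can create their own clocks" point is exactly what a ring
oscillator does: an odd number of inverters wired in a loop has no stable
state, so it toggles forever. A toy Python sketch (a discrete-time
idealization with one gate delay per step, not a circuit simulation):

```python
def ring_oscillator(n_inverters=3, steps=12):
    """Idealized ring oscillator: each step advances every gate by one gate delay."""
    assert n_inverters % 2 == 1, "an even ring settles into a stable state"
    state = [1] + [0] * (n_inverters - 1)   # start from an asymmetric state
    trace = []
    for _ in range(steps):
        # each inverter's new output is the NOT of the previous stage's old output
        state = [1 - state[i - 1] for i in range(n_inverters)]
        trace.append(state[0])              # watch node 0
    return trace

# node 0 toggles with period 2 * n_inverters gate delays - a free-running clock
print(ring_oscillator())
```

No external clock appears anywhere; the loop's own gate delay sets the
frequency, which is why real chips use this exact structure to generate
on-die clocks.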

The author seems to confuse the chosen approach to designing computers (VLSI)
with the actual physical capabilities of a transistor. We have opted over the
last forty years to develop the CMOS logic gate way of organizing computers.
There are other ways, as the brain demonstrates, and it is not at all clear
that you can't do it with novel transistor topologies.

~~~
brudgers
Whenever I see comparisons of the brain to a computer, along with the implicit
suggestion that computers will someday replicate brains, I am reminded of this
passage from Leibniz's Monadology, and of the thought that today's computers
are but yesterday's mills:

 _"Moreover, it must be confessed that perception and that which depends upon
it are inexplicable on mechanical grounds, that is to say, by means of figures
and motions. And supposing there were a machine, so constructed as to think,
feel, and have perception, it might be conceived as increased in size, while
keeping the same proportions, so that one might go into it as into a mill.
That being so, we should, on examining its interior, find only parts which
work one upon another, and never anything by which to explain a perception.
Thus it is in a simple substance, and not in a compound or in a machine, that
perception must be sought for. Further, nothing but this (namely, perceptions
and their changes) can be found in a simple substance. It is also in this
alone that all the internal activities of simple substances can consist."_

[<http://philosophy.eserver.org/leibniz-monadology.txt>]

[<http://en.wikipedia.org/wiki/Monadology>]

~~~
ckuehne
Whenever I see Searle's Chinese Room argument, or a version of it, invoked to
show that brains cannot possibly be computers, I am reminded of a passage from
Terry Bisson's "They're Made out of Meat" [1]:

"They're made out of meat."

"Meat?"

"Meat. They're made out of meat."

"Meat?"

"There's no doubt about it. We picked up several from different parts of the
planet, took them aboard our recon vessels, and probed them all the way
through. They're completely meat."

"That's impossible. What about the radio signals? The messages to the stars?"

"They use the radio waves to talk, but the signals don't come from them. The
signals come from machines."

"So who made the machines? That's who we want to contact."

"They made the machines. That's what I'm trying to tell you. Meat made the
machines."

"That's ridiculous. How can meat make a machine? You're asking me to believe
in sentient meat."

[1] <http://www.eastoftheweb.com/short-stories/UBooks/TheyMade.shtml>

~~~
brudgers
Despite its entertainment value, there is nothing in the passage which
supports the belief that the brain is a computer (or even a "meat computer")
or demonstrates why the analogy of brain as computer is logically different
from brain as mill. That's not to say that the complexity of a computer may
not make for a more attractive analogy, but one must keep in mind that no
matter how attractive Camelot appears on screen, it's only a model.

~~~
ckuehne
You quoted a statement Leibniz made about 300 years before Turing proved that
there are machines that can compute everything that is computable. I and
others explained in comments below why we have reason to believe that
computers are fundamentally different from mills, and that Leibniz's argument,
as well as Searle's, falls flat. By the way, it is not necessarily the
"complexity of a computer that may create a more attractive analogy". The
physical parts of computers can be less complex than those of mills. I wonder
how Leibniz would have reacted if he had been shown Google. Would he have
believed, for example, that the essence of its face recognition software must
be "sought in a simple substance"?

For more rebuttals of the Chinese Room argument, see its Wikipedia page. I
like this one in particular: a guy sits in a room and waves a magnet up and
down, thereby creating electromagnetic waves. But you don't see light coming
out, so light cannot possibly be electromagnetic waves.

------
CountHackulus
Just a small nitpick: while the state of the art in x86 land might be 3GHz,
the IBM POWER chips (and, I think, the Z mainframes too, though I'm not sure
on that) have gone far beyond that speed.

POWER6 chips reached 5.0GHz in 2008: <http://en.wikipedia.org/wiki/POWER6>

POWER7 chips, however, have been clocked down to 4.25GHz:
<http://en.wikipedia.org/wiki/POWER7>

~~~
Symmetry
The limit to how fast you can clock a CPU is mostly set by how long the chains
of CMOS gates run before they hit a clocked latch. Having one gate drive 10
other gates takes longer than having it drive only 1, so there's a standard
metric called FO4 (fan-out of 4) that can be used to measure logic depth in
gate delays. IBM has often put more effort into achieving low FO4 depths for
its processors (usually around 15 or so) vs. x86 (around 20).
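As a rough illustration of how FO4 depth turns into clock frequency: cycle
time is roughly (stage depth in FO4) times (FO4 delay for the process). The
13ps FO4 delay below is a made-up round number purely for illustration, not a
datasheet figure:

```python
def max_clock_ghz(fo4_delay_ps, depth_fo4):
    """Max clock frequency if every pipeline stage is depth_fo4 gate delays deep."""
    cycle_ps = fo4_delay_ps * depth_fo4
    return 1000.0 / cycle_ps   # 1000 ps per ns -> GHz

# same hypothetical process, different design targets:
print(max_clock_ghz(13, 15))   # aggressive IBM-style depth -> ~5.1 GHz
print(max_clock_ghz(13, 20))   # typical x86-style depth    -> ~3.8 GHz
```

With identical transistors, the shallower-per-stage design clocks noticeably
faster, which is the pattern the POWER6 numbers above reflect.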

------
lallysingh
Hmm, so, if their metaphor really held, the brain's computation could be
simulated with a 45MHz CPU? Well, let's fix this up a bit...

(1) Give it 1000 clock cycles of CPU work to simulate a single neuro-tick.

(2) The clock is actually variable 5-500Hz (from the article).

So, 500Hz x 5M = 2.5GHz of neuro-ticks per second; at 1000 cycles each, that's
2,500GHz of CPU power. An Amazon cluster-instance box is 8 Xeon cores at
2.93GHz, so about 110 cluster instances to simulate a brain?
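The arithmetic, spelled out (the 1000-cycles-per-tick figure is the same guess
as above, not a measured number):

```python
neurons         = 5_000_000   # the article's "5-million core" metaphor
max_rate_hz     = 500         # upper end of the 5-500 Hz range
cycles_per_tick = 1000        # guessed CPU cost of one neuron update

required_hz = neurons * max_rate_hz * cycles_per_tick
box_hz      = 8 * 2.93e9      # 8 Xeon cores at 2.93 GHz

print(required_hz / 1e9)      # -> 2500.0 GHz of CPU power
print(required_hz / box_hz)   # -> ~107 boxes, i.e. "about 110"
```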

~~~
IvoDankolov
Bear in mind that you also need to somehow do a brain scan with enough
resolution to map every single connection if you want something functional.
Also, what are you going to do about the sensory inputs (sight, hearing, the
rest of the nervous system)? I'm sure it can't be good for a person's sanity
to suddenly become 100% disconnected from the world.

And while a 1:1 map of the brain might be feasible now or in a few years,
though very expensive (although getting ever cheaper), it's kind of like the
straw-man AI researcher's answer to everything: build a neural network, repeat
the training set, and hope that it computes what we want. Flaunting your lack
of knowledge and trying clever ways to avoid solving the problem yourself is
just asking for trouble in so many ways.

In the end, the point is that AI is a software problem. Adding hardware will
let you get away with a more brute-force approach to solving it, but effective
solutions are what matter most.

~~~
robryan
Imitating the brain may be a matter of having enough computational power to
allow a simple method to work. It reminds me of NLP, where we seem to be
coming full circle back to simple methods like Naive Bayes, as they show
better results than complex methods once given enough data.
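The "simple method" in question really is tiny to implement. A minimal Naive
Bayes text classifier with add-one smoothing, as a sketch (the spam/ham toy
data is invented for illustration):

```python
from collections import Counter
import math

def train(docs):
    """docs: list of (label, tokens). Returns counts needed for classification."""
    label_counts, word_counts, vocab = Counter(), {}, set()
    for label, tokens in docs:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(tokens)
        vocab.update(tokens)
    return label_counts, word_counts, vocab

def classify(tokens, model):
    label_counts, word_counts, vocab = model
    total_docs = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label in label_counts:
        lp = math.log(label_counts[label] / total_docs)   # log prior
        total_words = sum(word_counts[label].values())
        for w in tokens:
            # add-one (Laplace) smoothing so unseen words don't zero things out
            lp += math.log((word_counts[label][w] + 1) / (total_words + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("spam", "buy cheap pills now".split()),
        ("spam", "cheap pills online".split()),
        ("ham",  "meeting agenda for tomorrow".split()),
        ("ham",  "lunch tomorrow maybe".split())]
model = train(docs)
print(classify("cheap pills".split(), model))   # -> spam
```

The model is just word counts, so throwing more data at it is trivially cheap,
which is exactly why it scales so well.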

------
1337p337
This reminds me of Chuck Moore's 144-core, unclocked (!) colorForth CPU:
<http://greenarraychips.com/home/documents/greg/GA144.htm>

I wonder when he'll catch up to 5 million.

~~~
Jach
<http://colorforth.com/blog.htm>

"Here's GreenArrays' latest receipt of GA144 chips: 12 wafers; 14561 chips;
2,096,784 computers. And the fun is just beginning."

------
3pt14159
At the time of this comment, this article has 35 points and 0 comments. Over
the past 6 months or so I've noticed a trend that the best HN-worthy articles
often have points-to-comments ratios of 5 to 1 or higher. This is a clear
example of that.

~~~
ChrisMac
For what it's worth, in the case of this article I found it very interesting,
but since I only have a passing knowledge of the topics it's discussing, I
don't feel there's anything I can personally contribute to a conversation
about it.

It's still a great article though, and a lack of comments doesn't detract from
that.

------
guscost
I've been fascinated by Buzsaki and others in complementary fields since
learning of them, and I've also written down some of my own ideas on the
subject, at least from a purely theoretical perspective. I can't wait to see
what the next few decades bring.

<http://guscost.com/2011/04/12/science-analog-confabulation/>

------
vl
My layman's thinking on the related subject of building a brain-like computer
is that it's currently possible (although expensive) to build the required
hardware - i.e. a custom system with the required number of connected
electronic neurons - but I don't see a way to "boot it up". A human brain
essentially boots from the DNA - i.e. its layout and basic functions are
predetermined by the DNA, and then it gets trained for specifics. Even if we
can train a machine brain, how do we boot it up to a trainable state?

------
forkandwait
When we finally design a simulated brain, I wonder if it will be really good
at spatial and behavioral tasks (balancing, motor skills, quick non-explicit
decision making, etc.) but really bad at doing math?

Not to say we shouldn't keep trying, but we all seem to think that the best
computer will evolve from solving lots and lots of partial differential
equations into being like the brain, yet animal brains have evolved to solve
really different questions than the ones we have been making computers for.

~~~
bermanoid
_Not to say we shouldn't keep trying, but we all seem to think that the best
computer will evolve from solving lots and lots of partial differential
equations into being like the brain, yet animal brains have evolved to solve
really different questions than the ones we have been making computers for._

The thing that always strikes me as interesting about the evolution of
intelligence is that it's a mostly unnecessary latecomer to the evolutionary
party: plenty of things thrive in nature without anything even remotely
resembling human intelligence, even plenty of large animals that function
quite well in complex environments rely mostly on hard coded "unintelligent"
systems in their brains.

The fact is, most of the problems that need to be solved to survive and
reproduce can be solved very effectively without much general intelligence.
Sure, once evolution discovers intelligence it turns out to be a very
efficient way to implement a lot of functionality that would otherwise need to
be hard coded, but it's not strictly _necessary_ , and nature did just fine
without it for a long time.

The good news: that also means that it's unlikely that significant
evolutionary pressures went into designing the complex systems responsible for
intelligent thought, so they were probably accidentally "designed" via random
drift. If you think about that, given the complexity of the algorithms
involved, it means that the algorithm space at that level of complexity is
probably fairly dense with algorithms that function intelligently (where by
"fairly dense" I mean that it's a lot denser than we might assume otherwise,
and the solution that evolution came up with is probably not the only possible
one).

That means if we ever figure out how to do a guided search through algorithms
of the right complexity in the right way, we may yet stand a chance of
"accidentally" discovering intelligence rather than deliberately designing it.

~~~
Peaker
> that also means that it's unlikely that significant evolutionary pressures
> went into designing the complex systems responsible for intelligent thought,
> so they were probably accidentally "designed" via random drift

I find this unlikely. Perhaps the initial "push" towards intelligence was
based in random drift, but humans' evolutionary path has actually paid quite a
dear price to have the high intelligence that we have:

* Larger brains led to larger heads

* Larger heads led to the need to give birth earlier, and with more risk

* Earlier birth led to helpless babies

* Helpless babies require much more care, and mothers become dependent on fathers

* Large brains are only very useful if trained for long periods of time leading to long, expensive and dangerous childhoods

I don't think this whole process can be explained by random drift.

I think the question of what caused pressure towards higher intelligence in
our evolutionary process can be answered by competition.

Analogously, consider tree heights. A tree doesn't need to be very high at all
to collect sunlight. But a tree competing with other trees can grow to
tremendous heights due to competition.

When competing for resources with other similarly intelligent beings, more
intelligence could definitely yield an evolutionary edge.

------
SeamusBrady
Some of the comments seem to revolve around confusing the map with the
territory - cf. Lewis Carroll's map on "the scale of a mile to the mile" -
[http://en.wikipedia.org/wiki/Map%E2%80%93territory_relation#...](http://en.wikipedia.org/wiki/Map%E2%80%93territory_relation#.22The_map_is_not_the_territory.22)

------
programminggeek
The brain is also water cooled. Without proper water cooling it overheats
causing segfaults and a white screen of death.

~~~
ignifero
i disagree on the screen color

------
arapidhs
transisotr technology cannot emulate brain activity. something new and more
analog oriented suits better imho

------
asadotzler
nature's had a long time to sort out some decent hardware and software
configurations. let's follow with computers.

------
ignifero
While it's true that some cells, e.g. pyramidal cells in the hippocampus, can
exhibit intrinsic oscillations, that's not true for most of the brain. Plus,
rhythms usually arise in networks, not single cells, and require the network
to sustain them (that's why, for example, theta doesn't persist in vitro). Is
there an example of a single cell that can generate rhythms?

~~~
cyrus_
You are correct: single neurons are not intrinsic oscillators in vivo as far
as anyone can tell. They could theoretically be -- there are ion channel
combinations that would put neurons into an intrinsically oscillatory regime
-- but that isn't really seen, afaik.

Oscillations in the brain arise due to excitatory-inhibitory loops.
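That network-level mechanism is easy to see in a rate model. Below is a sketch
of a Wilson-Cowan-style excitatory-inhibitory pair; the parameter values are
the classic textbook ones usually quoted as sitting in the oscillatory regime,
so treat this as an illustration rather than a model of any particular rhythm:

```python
import math

def S(x, a, theta):
    """Sigmoid response, shifted so that S(0) = 0 (the usual Wilson-Cowan form)."""
    return 1.0 / (1.0 + math.exp(-a * (x - theta))) - 1.0 / (1.0 + math.exp(a * theta))

def simulate(steps=4000, dt=0.05, P=1.25):
    """Euler-integrate an excitatory (E) / inhibitory (I) population pair."""
    E, I = 0.1, 0.05
    trace = []
    for _ in range(steps):
        # E excites itself and I; I feeds back to inhibit E - a delayed
        # negative-feedback loop, which is what generates the rhythm
        dE = -E + (1 - E) * S(16 * E - 12 * I + P, a=1.3, theta=4.0)
        dI = -I + (1 - I) * S(15 * E - 3 * I, a=2.0, theta=3.7)
        E += dt * dE
        I += dt * dI
        trace.append(E)
    return trace

trace = simulate()
late = trace[len(trace) // 2:]
print(min(late), max(late))   # a sustained swing, not a fixed point
```

Note that neither population oscillates on its own: cut the I-to-E coupling
(set the 12 to 0) and the remaining one-dimensional dynamics can only settle
to a fixed point, which mirrors the single-cell vs. network distinction above.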

