
Ray Kurzweil does not understand the brain - nollidge
http://scienceblogs.com/pharyngula/2010/08/ray_kurzweil_does_not_understa.php
======
api
I'm a life-long programmer who presently works in software, but I studied
biology in college... mostly because I wanted to learn about learning by
studying how living systems do it.

Biological systems are nothing like anything we would ever engineer, and to
understand them we must remove our "anthropocentric engineer goggles" and look
at them for what they are. Analogies between biological systems and computers,
software, machines, etc. are _very loose_ analogies meant to illustrate a
point. Never take these analogies too seriously.

DNA is not a program-- it is a molecule, and one that may very well do things
at the quantum level that are biologically important.

The brain is not a neural network. It is an interconnected colony of living
cells of a variety of types, and it has been shown that all types of cells in
the brain are involved in cognition.

We are nowhere near anything with the parallelism or information density of
the brain. Getting close would take an advancement in computer technology of
the magnitude of the transition from vacuum tubes in individual boxes to a
32nm Core i7. Nature has had billions of years, and it is way ahead of us.
There's a nascent field called quantum biology that suggests that the brain
may very well be a quantum computer, so maybe quantum computers are the vacuum
tubes->ICs scale transition I am speaking of.

I think it's now possible with a big rack of multi-core machines in an MPI
cluster to approach the capabilities of a fruit fly's brain.
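
Back-of-envelope, in Python, of the kind of estimate I mean (every constant
here is my own loose assumption, not a measured fact):

    # Can a commodity MPI cluster match a fruit fly's brain? Rough guesses only.
    FLY_NEURONS = 100_000          # order-of-magnitude figure for Drosophila
    SYNAPSES_PER_NEURON = 1_000    # loose assumption
    FIRING_RATE_HZ = 100           # assumed average spike rate
    FLOPS_PER_EVENT = 10           # assumed cost to model one synaptic event

    flops_needed = FLY_NEURONS * SYNAPSES_PER_NEURON * FIRING_RATE_HZ * FLOPS_PER_EVENT
    print(f"~{flops_needed:.0e} FLOPS for real time")   # ~1e11

    NODES, GFLOPS_PER_NODE = 32, 50    # assumed sustained throughput per node
    cluster = NODES * GFLOPS_PER_NODE * 1e9
    print(f"cluster: ~{cluster:.0e} FLOPS, {cluster / flops_needed:.0f}x the estimate")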

Cells are not machines. I don't think we have the language to really describe
what they are perfectly, but the closest I can come is "stochastic quantum
probability field device." A bacterial flagellum is not a "motor," it is a
quantum-scale chemo-electro-motive... well... our language breaks down. Like
Feynman said, don't tell stories. Just speak literally and then use math.

Actually, come to think of it, there's one engineering analogy that might work
for biology. Biology is quantum-scale nanotechnology. Yeah, that's pretty
close.

(On a related tangent, I've found that many engineers are sympathetic to
intelligent design type arguments against evolution. This is because they try
to think about biology like engineers and take these machine analogies
literally. It just doesn't work like that.)

~~~
jasongullickson
_Getting close would take an advancement in computer technology of the
magnitude of the transition from vacuum tubes in individual boxes to a 32nm
Core i7._

...so about 35 years? ;)

~~~
api
Maybe, though I am concerned about processing power stagnating.

Advancement is governed by economics as well as technical capability. There
must be _demand_ for new technology, or a field stagnates. Witness aviation as
an example... utterly stagnant outside of military niche applications.

People seem to no longer want faster and faster computers, and the market
seems to be moving toward lighter-weight lower-power portable devices like
netbooks, the iPad, etc. Those have _slower_ CPUs than current-generation
desktops. I suppose the extreme gamer and server/datacenter markets are still
driving performance, but for how long?

One problem is that programmers are not using the capabilities of current-
generation processors, partly because the dominant OS (cough Windows cough)
makes it horrifically painful to deploy desktop apps. This drives all
development to the web and turns desktops into thin clients. In the end this
kills demand for performance outside the datacenter market.

~~~
troystribling
You are correct for consumer electronics. But commercial and scientific
computing are driving increases in computational power and density that do not
seem to be letting up. In the 4 years I have worked in hosting, the amount of
server you can get in a 4U box has gone from 8 2.5GHz cores to 24 2.5GHz cores
and from 32GB to 256GB of RAM. The storage and compute requirements for
applications are also increasing substantially.

------
rsaarelm
Myers is attacking Kurzweil on the assumption that Kurzweil is actually
proposing that the brain's structure should be reverse-engineered from DNA,
which would be intractably hard. Only Kurzweil isn't saying this anywhere.
Instead, he seems to be using DNA as a measure of the amount of irreducible
complexity that needs to go into a system that will end up with the complexity
of a human brain.

Basically, we have no idea where we will get the AI source code we can
actually do something with, but we have some reason to believe that the most
concise version of the source code won't contain more data than the human
genome.
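
The arithmetic behind that bound, sketched in Python (the compression ratio is
an assumption, chosen to match the figure that gets thrown around in this
thread):

    BASE_PAIRS = 3.2e9        # approximate size of the human genome
    BITS_PER_BASE = 2         # four nucleotides -> 2 bits each

    raw_mb = BASE_PAIRS * BITS_PER_BASE / 8 / 1e6
    print(f"raw genome: ~{raw_mb:.0f} MB")                   # ~800 MB

    ASSUMED_RATIO = 16        # genomes are highly repetitive; the 50MB
                              # figure implies roughly this ratio
    print(f"compressed: ~{raw_mb / ASSUMED_RATIO:.0f} MB")   # ~50 MB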

The rough intuition might be: if we wanted to simulate a brain in the
caricatured reverse-engineer-the-DNA way, we'd need an impossible computer
that could simulate years' worth of exact quantum-level physics in a cubic
meter of space, but the human DNA and the basic cellular machinery for hosting
it would be the only seriously difficult bits to stick in there; the rest
would just be simple chemicals and the impossibly detailed physics simulation.

I guess the analogy then is that we write the AI source code (which we don't
know how to write yet), which is supposed to end up at most around the length
of the human genome, but which can run sensibly on existing hardware. Then
deterministic processing, entirely unlike the immensely difficult-to-compute
protein folding from DNA, will make this code instantiate a working AI
somewhere at the level of a newborn human baby, in the same way that the
genome initiates protein-folding-based processes that make a single cell grow
into a human baby given little more than nutrients and shelter from the
external physical environment.

So it doesn't seem like a really strong statement of overoptimism. It's
basically just saying that the human brain doesn't seem to require a
mystifyingly immense amount of initial information to form, but instead
something that can be quantified and very roughly compared with already
existing software projects. I'd still guess it might take a little more than
the ten years to come up with any sensible code with hope of growing into an
AI though.

~~~
akeefer
The argument Myers is making is that while the DNA might be the _input_ to the
system, the total amount of data in the system is that input plus the rules
around how that input is interpreted/works. Those rules (for example around
protein folding) are currently encoded in biological systems as the laws of
physics, more or less, but they're insanely complicated and currently unknown.

So the point is that perhaps if you had a system that simulated all of the
laws of physics exactly correctly such that proteins folded and interacted
exactly right, only _then_ could you get away with an amount of input
equivalent to the amount of information encoded in the part of our DNA related
to the brain.

Actually encoding _those_ rules is probably the harder part of the problem,
and could easily take several orders of magnitude more work. (10x? 1,000,000x?
Who even knows?)

~~~
ewjordan
1) An estimate of complexity is what it is - an estimate of complexity. It's
_not_ a claim that the way to achieve AI is to figure out the details of the
particular encoding that nature ended up using, so the precise nature of those
rules is not something we care about.

2) While it's true that those runtime rules (which we can kind of consider as
the "interpreter" for our DNA) are extremely complex, this has almost zero
bearing on the informational content in our DNA that is put towards creating
the "intelligence algorithm", whatever that is. Sure, there's probably a bit
of extra compression based on the fact that the physics allows some actions to
be "built in", but unless you believe that DNA is physically optimized to make
intelligent computer construction very concise, the logical content of these
computations is probably explicitly "written" in.

And it's hard to believe that DNA is somehow specifically optimized for
intelligence, because it was first used in completely unintelligent creatures
and appears in exactly the same form now.

Now, it _may_ be the case that DNA's physics are tailor-made to efficiently
code for useful _physical_ structures. But intelligence is a level of
abstraction above that, and we're all but guaranteed that very little
compressibility exists in the "language" for such higher level constructs.

What would an argument be without a strained analogy: if you're writing a
complex web application, the size of your application is roughly independent
(within an order of magnitude, for sure) of the architecture that it will
ultimately run on (where by "size of application" I mean the size of
everything that it takes to run it, interpreters, frameworks, etc.). Sure, the
binary size might be slightly different depending on whether you're writing it
for ARM, PPC, x86, etc., but not hugely different.

We would be _extremely_ surprised if on three platforms your executable
weighed in at 10 MB and on a fourth (which had a few different machine-level
instructions) it compiled down to 10 KB - the only way we could imagine that
happening is if someone somehow "cheated" and embedded large parts of your
actual application logic into the processor, adding specialized Ruby on Rails
instructions to the machine code, or something like that. :)

Encoding and dynamics details may make differences in compressibility, but
past an order of magnitude, you're really talking about "cheating", and it's
an Occam's Razor problem to assume that nature optimized in such a way for
intelligence...

~~~
akeefer
I think the article's original point (and mine as well) was that considering
the size of the code (i.e. the DNA) as a measure of the complexity of the task
is totally disingenuous when you don't have (to use your analogy) the web
server, the libraries you're calling, the parser/compiler/linker for the
language, the operating system for the server along with its drivers/TCP
stack/etc., the processor it runs on, the motherboard, or the storage. In
order to turn 10,000 lines of code into a web application, you need millions
of lines of code (and Verilog or what have you) in terms of infrastructure.

The problem for AI is not just encoding the DNA, as it were, it's in building
all those other pieces around it. Estimating the complexity of building a
software brain based on the amount of information in DNA is like estimating
the complexity of building a web application using 1950's hardware. "It's only
10,000 lines of code! How hard can that be? All we have to do is write the
code, plus the frameworks, programming language, and operating system, plus do
all the hardware design."

~~~
ewjordan
_It's only 10,000 lines of code! How hard can that be? All we have to do is
write the code, plus the frameworks, programming language, and operating
system, plus do all the hardware design._

Except that DNA doesn't even come _close_ to being a high level language,
since the low level details were not specifically designed for compressibility
of the code (in fact, the low level details, the "bare metal ops", are pretty
much fixed by the for-all-intents-and-purposes random laws of physics, which
means we shouldn't assume that they enable any particularly high
compressibility ratios for _anything_ ).

So a more apt comparison would be if we saw an assembly language program in
some strange incomprehensible assembly language and said "It's only 10,000
lines of operations on the bare metal! Now all we have to do is figure out how
the hell the system this runs on works, and how we can translate that code
into a more sensible (and probably _vastly_ more compact) form."

...which might even be a _harder_ problem, to be fair.

Kurzweil has essentially proposed the existence of an algorithm of length N
that does whatever it is we mean by intelligence. Which is fine, and I think
is probably correct (IMO, even his estimate of the minimal amount of code it
would take is probably too high, though that's another story).

But he's overlooking the fact that the mere existence of such a compact
algorithm doesn't help us find it at all, and I think a lot of the complaints
others have made about his statements are more aimed at that leap of logic,
not the existence claim itself. I completely agree that even brain scanning
tech might not help us simulate the important bits very well, even if we did
have access to that tech and computers fast enough to run the sims.

------
jerf
There's a psychological principle, whose name I've forgotten, that observes
that by framing a debate correctly you can cause people to accept its
assumptions without even thinking about them.

This conversation here on HN is a great example of that. Simply by the way the
article is written, it is being taken nearly as fact by most participants that
a human-scale AI simulation must work by physically simulating the brain. This
may ultimately be true but there is no _a priori_ reason to believe it. The
brain may implement something that can be simulated "close enough" by a much
simpler computation system.

Chaos is chaotic, obviously, but the human brain is a pretty fuzzy system too.
It can't be too pathologically chaotic; people speak as if getting the 15th
decimal place wrong will blow up the system, but the brain simply cannot be
that sensitive or the removal of a single neuron would break our brains. Our
brain state must be at least metastable to work at all. Removing a neuron or
getting something wrong in the 15th decimal place may result in some small
change of behavior three years later vs. not removing it or getting it right,
but our brain states are already so fuzzy and noisy that's not going to be the
stopper.
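
A toy illustration of that metastability (my own example, a Hopfield-style
attractor network in Python; nothing here is specific to real neurons):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 200
    pattern = rng.choice([-1, 1], size=N)

    # Hebbian weights storing one pattern, no self-connections.
    W = np.outer(pattern, pattern) / N
    np.fill_diagonal(W, 0)

    state = pattern.copy()
    state[:20] *= -1          # "remove" or corrupt 10% of the units

    for _ in range(5):        # a few rounds of synchronous updates
        state = np.sign(W @ state)

    print(np.array_equal(state, pattern))   # True: the attractor absorbs the damage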

The stopper will be to see whether or not there is a higher-level simulation
that can be run that is less complex than simulating the physics entirely. The
secondary question is whether we can make something that we would call _human-
intelligent_ even if it turns out we can never "upload our brains" without
critical data lossage occurring. That would be something as intelligent as us
that is nevertheless fundamentally incompatible with human biology, with
neither able to simulate nor understand the other. I can make coherent
arguments either way, as can many people, but by framing the question as
physical simulation this has not been one of the more intelligent debates on
the topic we've seen here. Physical simulation is one possible path to AI and
brain uploading, and not even the most likely or interesting one.

~~~
ars
No, people are talking that way because it's obvious to the participants that
no one knows how to do the higher level simulation. And there is no hope of
anyone figuring it out any time soon.

So people figure why not "run the program" that already exists, and that's
what this conversation is about.

------
yungchin
This rant unfortunately fails to clearly demonstrate how Kurzweil errs. It
goes on about complexity and emergence, but why would complex interactions not
emerge from a computer simulation just as they do in the real biochemical
system?

Nevertheless, my gut feeling, too, is that Kurzweil is mistaken. I can't quite
put a finger on it yet, but at least one problem I see is this: Kurzweil seems
to suggest that the observation that the genome consists of only 50MB of data
(after compression) somehow gives us an upper bound on the complexity of the
system. I'd however suspect it rather gives us a lower bound: factor in all
the epigenetics, external interactions, the not-necessarily-simple rule set
provided by physical chemistry (this is not in the genome, obviously), etc.,
and the problem may be quite a bit larger.

Take for example the way we currently believe gene transcription promoter
networks to work. The combinatorial nature of those interactions means that
even though the underlying data is "only" a few megabytes, the system you end
up simulating gets very big very quickly.
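
A quick sketch of that blow-up (my numbers are illustrative assumptions, not
measurements):

    from math import comb

    TFS = 1500               # rough order for human transcription factors
    SITES = 5                # assume each promoter reads ~5 factors

    print(f"{comb(TFS, SITES):.1e} possible 5-factor combinations")  # ~6e13

    # Each combination of k binary inputs can implement any of 2^(2^k)
    # logic functions, so the dynamic repertoire is larger still:
    print(f"{2 ** (2 ** SITES):.1e} boolean functions of 5 inputs")  # ~4e9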

~~~
mechanical_fish
_why would complex interactions not emerge from a computer simulation_

One answer is "They will, and surprisingly quickly. But they will be a
completely different set of complex interactions than are observed in the real
world, because of some roundoff error in the binary representation of the _N_
th digit of some apparently unimportant constant. Unfortunately, because the
system is complex, you'll probably spend the rest of your career trying to
track down that error, and fail."
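
A toy version of that first failure mode, using the logistic map (my choice of
chaotic system, purely for illustration):

    def logistic(x, r=3.9, steps=60):
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    a = logistic(0.500000000000000)
    b = logistic(0.500000000000001)   # "roundoff error" in the 15th digit
    print(a, b)   # after 60 steps the two trajectories bear no resemblance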

Another answer is: They would, if the simulation was comprehensive enough.
Unfortunately, phase space is big. You just won't believe how vastly, hugely,
mind-bogglingly big it is. Seriously: Your mind reels when confronted with the
number of different molecular interactions going on inside the "simplest"
single-celled prokaryote, so you abstract it away, almost as a reflex, to stop
yourself from going mad. Then you abstract away the first-order abstraction.
Then you keep going. Soon you begin to imagine that you can model an entire
collection of a trillion organisms, just as a naive programmer imagines that
they can rewrite Windows in three days if only they use a powerful enough
language. It's a mere matter of programming!

~~~
yungchin
As for the first answer: we all have to learn to deal with roundoff error.
That's not, in principle, an obstacle.

The other answer contains an implicit assumption that's not obviously correct:
you suggest that complexity only arises when you enumerate every possible
dimension of phase space. But physical simulations have been very successful
at reproducing complex behaviour from simple rules, without taking into
account every particle's state vector.

Finally, did you really try to equate my statement to the statement that
Windows could be written in three days? ...

~~~
mechanical_fish
No, I'm poking fun at Kurzweil's 2020 deadline, not at you. You appear to have
been wise enough not to specify a deadline...

As for this statement:

 _physical simulations have been very successful at reproducing complex
behaviour from simple rules_

Absolutely, but it doesn't follow that _every_ complex behavior can be
reproduced from simple rules. To overgeneralize from success in one field is
the occupational illness of futurists. It's certainly a key problem for
Singularitarians, who tend to get so enthusiastic about Moore's Law that they
forget that most of the world has nothing to do with microelectronics.

~~~
yungchin
Haha, well I guess the fact that for a moment I felt that was condescending
mostly says a lot about me ;)

------
wihon
His last comment,

 _The media will not end their infatuation with this pseudo-scientific
dingbat_

chimes with the large majority of bold scientific claims that appear in the
press. For example, not that long ago the press jumped on Craig Venter's
(<http://bit.ly/uEC5>) 'artificial cell' (<http://bit.ly/c27AL5>), hailing it
as the beginning of man-made organisms and making bold predictions about the
future of life itself, riling up environmental groups no end. (I'm not saying
that wasn't a great achievement. But all his team _really_ did was take out
one DNA tape and put back an identical, if newer, version. Bread and yoghurt
manufacturers have been doing a smaller version of that for a long time. Not
exactly playing God.)

It would be nice if there was a scientific-bullshit detector that made sure
the press didn't go crazy over wild claims. Proposal for a startup, anyone? :)

~~~
tokenadult
_The media will not end their infatuation with this pseudo-scientific dingbat_

This is one reason that many, many times on HN when there is a link to a blog
post about some news story on a science discovery, I post the link to Peter
Norvig's article on how to evaluate research,

<http://norvig.com/experiment-design.html>

as it seems to me that most readers need more practice in critical reading of
statements about science. PZ is one of the few bloggers who knows most of
those points already, but he frequently writes about other people who forget
them, so here in this thread too I'll remind HN readers about Norvig's advice
on how to read about science.

~~~
wihon
@tokenadult that's some good shit. :)

This is _slightly_ off-topic, but here goes anyway. I've long had an issue
with the huge disparity between what humanities/arts people (including the
large majority of the press) know about science, and what scientists know
about the arts and humanities. Most scientists I know are more than able to
hold their own in a conversation about, say, a good book, but 99% of everyone
else I've ever met doesn't know/want to know the second law of thermodynamics.

I'm not pointing fingers here, I just think there's a serious lack of
communication between arts and sciences. I think it's partly this lack of
general scientific knowledge that makes the humanities/arts dominated world of
the press believe pretty much anything a scientist says. And then, to make a
good story into a great one, it's blown out of proportion. Ho hum.

------
10ren
I'm not so sure about the simulation, but Ray's right that the brain cannot be
more complex than the data that specifies it, speaking information-
theoretically. A biologist probably wouldn't get this, because he has too much
knowledge of how difficult and complex the actual translation is. It's a
mathematical idea, like non-constructive proofs, which freak out sensible folk
- and rightfully so. ( _EDIT_ no offence intended, they freak me out too)

The only other source of information is the non-genomic environment: extra-
nuclear DNA like mitochondria, and the womb (which is arguably already
specified in the genome, unless mother nature has done a Ken Thompson
<http://cm.bell-labs.com/who/ken/trust.html> at some point).

But it's weird to claim that 50% of our genome encodes the brain. Really?
Perhaps it's just that 50% is required by the brain, much of it being
foundational to the whole organism (like standard libraries).

~~~
KirinDave
> I'm not so sure about the simulation, but Ray's right that the brain cannot
> be more complex than the data that specifies it

Which would be true if the DNA specification for that particular part of the
body were the only thing that specifies the brain. Myers is pointing out that
this assertion is patently false. The environment of the developing creature
and the interactions between cell types and their environment (and themselves)
are a giant information-content multiplier, and the DNA need not explicitly
specify any of this information for it to exist and be relevant.

Bringing this to a familiar compsci example: imagine software for creating
neural net recognizers. You can look at the source code for a net and say,
"This will take N inputs and produce N outputs." You can look at a finished
classifier and say, "Ah, I see what this does! It tells the airbags in this
car when to deploy!" But that's as far as you can go without the training data
that was used to train the classifier. This is a doubly good example because
it's often very difficult to determine HOW a complex neural net is doing what
it does, but it's fairly easy to explain how to train one to do that task.
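
A minimal sketch of that point in Python (the data and the learning rule are
stand-ins I picked, not anything from a real airbag system):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 4))                  # stand-in "sensor readings"
    y = (X @ [2.0, -1.0, 0.5, 3.0] > 0) * 2 - 1    # labels from a rule the code never states

    w = np.zeros(4)
    for _ in range(20):                            # plain perceptron updates
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:
                w += yi * xi

    print(w)   # this learned vector, not the loop above, encodes "when to deploy"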

~~~
sown
Is this sort of like having a pre-processor and system code to get a program
loaded, linked and running?

~~~
KirinDave
Perhaps, only the pre-processor would be in a feedback loop with the early
stages of the program life-cycle. Typically the linker and pre-processor are
one-way operations. In a developing organism, the environment feeds back on
the developing cells and triggers new responses, which cause new environmental
changes, which cause new feedback.

Biology is full of examples of this process going awry and leaving our bodies
with bizarre features. Dawkins's example of the recurrent laryngeal nerve,
which makes a crazy loop down into the mammalian chest cavity, is the classic
one.

------
StavrosK
"he seems to have the tech media convinced that he's a genius"

Looks like he understands the brain just fine, to me...

------
c1sc0
About Ray Kurzweil ~ "He's actually just another Deepak Chopra for the
computer science cognoscenti."

~~~
mechanical_fish
I really understand where this article is coming from. Spend a decade in a
biology laboratory working day and night just to figure out a tiny subset of
the role of _one_ simple protein, and you'll find that pseudoscientific
handwaving no longer sounds like a pleasant breeze, but like the grating of
sharpened fingernails on a chalkboard.

(For physicists the equivalent torture is a movie, which goes by the name of
"What the Bleep...", that came out a few years ago. OMFG if you want to drive
me into a towering rage just show me ten minutes of that film. It's like
watching someone make spitballs out of the manuscript of the _Eroica_
symphony.)

~~~
arethuza
Wow - that "What the Bleep..." does look _remarkably_ bad.

The trailer is Poe's Law in action:
<http://www.youtube.com/watch?v=m7dhztBnpxg>

I guess you don't want to watch that again? ;-)

~~~
mechanical_fish
It is deadly dangerous for me to watch a clip from that film. It could inspire
dozens of hours of rage-filled but closely-argued physics lectures. Given the
slightest provocation, I will go all xkcd.com/386 on its ass and my actual
career will die of neglect.

Now, I need to go calm myself by fixing some bugs before I start to throw
things. ;)

~~~
arethuza
For what it's worth, I was having a conversation with my son (he's 11) about
the key differences between scientific and religious beliefs - I think I'll
use that film as an example of how it can sometimes be difficult to tell the
difference - especially when people present what are essentially religious
beliefs using terminology derived from science.

(NB I remember reading some Erich von Däniken books when I was 9 or 10 and
getting awfully excited - I was quite upset when I found that people could
just make stuff up and present it as science).

~~~
mechanical_fish
That's the extra-infuriating thing. In _theory_ , I'm completely down with the
project of using parables based on science to convey religious ideas. Religion
is built out of the raw materials that the world gives you. If we live in a
world full of science and technology, we should expect our parables and our
myths and our stories to be filled with science and technology.

(Cue a chorus line of _Doctor Who_ cosplayers.)

So, in theory, I could have been okay with _What the Bleep_. In practice,
however, it is just horribly grating -- way more grating than any SF, even the
dumbest SF.

------
heyadayo
My programmer's understanding of the argument is this:

Even if you could reduce the brain to some sort of bytecode, an interpreter is
still necessary to run that bytecode. For instance, a Python program might be
a few bytes, but the interpreter is still a few megabytes. Yet both are
necessary to run the program. Who knows how large a brain-bytecode interpreter
would be, but probably very large.
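
A minimal sketch of the analogy (the opcode set is invented for illustration):

    PROGRAM = bytes([0x01, 0x05, 0x01, 0x07, 0x02, 0x03])  # push 5, push 7, add, print

    def run(code):
        stack, pc = [], 0
        while pc < len(code):
            op = code[pc]
            if op == 0x01:                            # PUSH <byte>
                stack.append(code[pc + 1]); pc += 2
            elif op == 0x02:                          # ADD
                b, a = stack.pop(), stack.pop(); stack.append(a + b); pc += 1
            elif op == 0x03:                          # PRINT
                print(stack.pop()); pc += 1

    run(PROGRAM)   # prints 12: six bytes of program, all the meaning lives in run()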

~~~
alisey
But you don't need to write the whole interpreter to _understand_ how that
small Python program works.

~~~
rayval
Assuming you have the documentation, or assuming that you can extrapolate from
a program in a language that you already know.

If one knows Python and is given a program in APL, then likely an
insurmountable barrier has been reached. If you don't have the docs that
describe the language, then one can try to infer the language by running
experiments on variations of the stored program. However, one needs access to
the processor in order to run different experiments and get different results
to be able to understand how the programming language works.

We don't have the CPU in a form that we can experiment with ("brains in a
vat"). We have a 50MB string in APL*2, a mostly unknown language for a mostly
unknown processor.

The other part is that this is not a program but a meta-program -- meaning
there are multiple levels of indirection. The DNA does not directly specify
the brain; instead it specifies rules for components that eventually arrive at
an assembly (guided by a rich context of voluminous other inputs over an
extended period of time) that constitutes a brain.
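
An L-system makes a decent toy model of that indirection (my analogy; the
rules are arbitrary): a few bytes of rules, applied repeatedly, unfold into a
structure vastly larger than the rules themselves.

    RULES = {"A": "AB", "B": "A"}    # two rules: the whole "genome"

    state = "A"
    for _ in range(25):              # repeated "development" steps
        state = "".join(RULES.get(c, c) for c in state)

    print(len(state))                # 196418 symbols from a 1-symbol seed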

------
edtechdev
Hard to take this criticism seriously when it starts off with so many ad
hominems.

I haven't read Kurzweil's specific claims, but I'd guess he's claiming we can
simulate brain function by 2020. We can already simulate simple organisms and
neural networks. You can already accomplish quite a bit of complex emergent
behavior with those.

Simulating the _entire_ brain would require an enormous amount of processing
(though surely feasible some day, if not 10 years, how about 20), but most
likely we'd make some trade-offs and sacrifices and still get a close
approximation (like we do with virtually every simulation).

Of course simulating a brain and simulating a human are not the same thing.
You can't really avoid having to simulate the entire body and its interactions
with the environment.

And of course we wouldn't suddenly "understand" the brain just by simulating
it. It would still be the same complex system, it's just we'd be able to
inspect it more closely. Such a simulation would at least help us understand a
good bit more about the brain's role in our cognition and actions.

And yeah, who knows, maybe in 20 years we could have some kick-ass AI in
counterstrike.

------
sprout
Kurzweil is a brilliant scientist. Calling him a Deepak Chopra is just stupid.
He's out of his depth is all.

Also, I wouldn't bet against him building an AI that for all intents and
purposes appears to be conscious. Full-human simulation is another thing, but
with that we are all out of our depth, and it's pretty likely that a conscious
computer would be quite capable of understanding it better than us.

~~~
flatline
"Calling him a Deepak Chopra is just stupid"

That's not much of an argument. He plays on the standard things that all
religions play on: the promise of some form of immortality, which
instinctively plays on the fear of death; claims that the near future will be
radically different than the present; etc. His extrapolations of exponential
growth are not entirely unfounded but he applies the same logic to everything
that suits his fancy without taking into account that science and technology
have made large but halting steps forward throughout history, and the
fundamentals of scientific insight and discovery haven't changed. He talks and
sounds like an evangelical to me. No doubt he is a brilliant scientist but
that doesn't warrant the cult of personality that has grown up around him,
which I perceive as a dangerous thing contrary to the aims of science.

~~~
sprout
I agree that cults of personality are bad things. But dismissing someone as an
evangelist and overrated isn't much of an argument either. It trivializes his
very real and important accomplishments, and conveniently allows you to
dismiss all of his ideas without looking at them on their individual merits.
In short, ad hominem, and unwarranted.

Kurzweil has several functioning products that would have fallen into the mess
you dismiss as nonsense two decades ago. Is he wrong on a lot of things? Yes.
But so is any other engineer looking for things that haven't been done before.
Even if Kurzweil has a spell where he does nothing of any merit for a decade
(which has yet to happen) I'll still confidently say you're foolish to rule
him out as a crank. He's earned his right to dream aloud.

------
paraschopra
Actually, the information needed for replicating the brain is not just limited
to the information encoded in DNA. Physics plays a great role in all of
molecular biology, and from physical interactions arise cell-cell
interactions, which then give rise to the complicated organs that you have in
the body. (And, of course, all these arose during billions of years of
evolution, which is encoded not only in DNA but in the environment also.)

So if you were to simulate the brain, modeling DNA would be A tiny part of it.
Encoding the deepest levels of physics (yes, quantum effects play many roles
in DNA) AND then having enough computational power to model the interactions
of those particles in real time IS A REALLY BIG DEAL.

------
scotty79
> The design of the brain is in the genome.

Given how much computation it takes to "decompress" the information about what
a protein is built of into the 3D layout of that protein (vide folding@home),
the statement that you could "decode" half of the genome into a working
simulated system is bold, to say the least.

The trouble with simulating a human 1:1 is not how complex the human itself
is, but how complex, bizarre, and computationally powerful the physical
hardware is on which the program "be human" runs.

------
DrJohnty
I too have my doubts about reverse engineering the brain. As I see it,
regardless of whether the Singularity arises or not, this century will achieve
a rate of progress incomparable to anything we have ever seen before. To get
this in perspective, just consider that in the nineteenth century more
technological breakthroughs were made than in all of the nine centuries
preceding it. Then in the first twenty years of the twentieth century, we saw
more advancement than in all of the nineteenth century combined. In this
century we will achieve 1000 times more than we achieved in the whole 20th
century, which was itself a period of progress never before seen. Whether we
reverse engineer the brain or not, the merging of human and machine
intelligence is an inevitability: you only need to look at how attached we are
to our iPhones and BlackBerrys to realise that we will ultimately be unable to
resist moving increased processing capability directly inside the body. The
consequence of that is massively increased capability whatever way you look at
it.

------
abecedarius
Kurzweil is reported as equating 25MB of compressed DNA to 1 million lines of
source code, which seems several times too low: if I compress the comment-
stripped source of one of my own projects, it compresses down to 8 bytes per
line of code as defined by sloccount. If we suppose compressed DNA has a
similar real info rate to my gzip-compressed source, then 25MB corresponds to
3 million SLOC, not 1. If you include blank lines and comments, it's more like
5 million.
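
A rough way to reproduce that estimate (the file name is a placeholder, and
this crudely strips only full-line comments, unlike sloccount):

    import gzip

    with open("myproject.py") as f:   # hypothetical source file
        lines = [l for l in f
                 if l.strip() and not l.lstrip().startswith("#")]

    compressed = gzip.compress("".join(lines).encode())
    print(f"{len(compressed) / len(lines):.1f} compressed bytes per line")

    # At ~8 bytes/line, 25MB compressed corresponds to 25e6 / 8 ~= 3 million SLOC.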

For another take, <http://www.dwheeler.com/essays/linux-kernel-cost.html> puts
the Linux kernel source at 4 million lines. Can you compile a 25MB kernel?
Compressed? (How much of it is 'introns', anyway? This might not be the best
example.)

(Not going to touch the argument about applicability. It does seem to me that
whenever Kurzweil glosses over details, the closer look always appears less
utopian. There are other writers about these ideas who I can read without
having to check every assertion, like Anders Sandberg.)

------
Confusion
Myers is wrong, because he overcomplicates. Kurzweil is also wrong, because he
oversimplifies. (I'm taking Myers's version of the argument at face value; I
actually suppose the argument was more sophisticated than that.)

First, where Myers is wrong: the human brain ultimately comes forth from a
bunch of information roughly equal to 1 million lines of code. If you could
reproduce those 1 million lines and set them loose, allowing them to construct
a human being (nothing else: what else could they construct?) and letting that
human being live in our world, you would have 'created' intelligence. It's as
simple as that, and attacking that abstraction is completely the wrong
approach to pointing out the problem with Kurzweil's argument.

So, then the two points where Kurzweil is wrong:

1) It's not just any million lines of code. It has to be exactly the million
lines of code in our genome, give or take some bits. Considering the
enormously complex interactions between these bits of code, this is worse than
reverse engineering the largest codebase of spaghetti code you could possibly
imagine. The simple example that Myers gives is enough to show this.

2) The million lines of code cannot just be executed anywhere. It encodes the
construction of a human from raw material and the subsequent operation of that
human. Give it different materials or a different living environment and
something entirely different appears, in most variations nothing remotely
capable of 'life'. And even if you could make it build something from
electronic components: slight differences in the perceptive systems can create
huge differences in the brain and the concepts in the brain. A machine with a
finite pixelized array of visual light receptors would build a completely
different conceptual model of the world. Reverse engineering the genome is not
only extremely hard: it is very unlikely to produce the result you want.

~~~
andolanra
Myers isn't trying to argue that the million lines of code aren't there or
aren't important. He's trying to argue that those million lines of code have
to run on hardware which is poorly understood at best, and that trying to
reverse-engineer the hardware is several orders of magnitude more difficult
than trying to understand the original programming. The hardware components
don't follow the law of superposition, either, so we can't just look at each
element in isolation. The entire thing has to be understood.

Imagine going back to ancient Roman/Greek/Etruscan/&c times and handing them a
ream of paper filled with the hexadecimal representation of an x86 application
compiled for a Windows environment, and then showed them what it looked like
when running. "Hey, look, now you can play videos and music!" Now imagine it
was several orders of magnitude more difficult than that, and you're beginning
to get the idea.

"If you could reproduce those 1 million lines and set them loose, allowing
them to construct a human being..." The exact point he was making was that
there's a lot of handwaved complexity in this statement, and that the
abstraction "understand human being program, run code" is abstract to the
point where it no longer accurately reflects the reality of the situation.

~~~
Confusion

      He's trying to argue that those million lines of code have
      to run on hardware which is poorly understood at best

He may be trying to argue that, but that isn't the central thesis of what he
_is_ arguing. His central thesis is that

      [The brain's] design is not encoded in the genome

and that's just patently false. There's only one single blueprint for the
brain and that's the genome. That the ways in which the blueprint is being
read, interpreted and carried out are not in the genome does not detract from
the fact that the genome is the blueprint. His arguments support the thesis
that a blueprint is not enough. The objections to Kurzweil that I list are a
summary/rephrasing of the arguments Myers provides. But he starts out by
saying "No, that isn't the blueprint" and none of the arguments support that
thesis.

------
nervechannel
The main thing Kurzweil has missed is that even though we have something like
the 'source code' for a human brain, we don't have the compiler.

If you're taking the (pretty weak) 'source code' analogy further, the compiler
is... a host human embryonic cell, running in a womb in a mother.

So thinking about the problem that way is a non-starter for obvious
chicken/egg reasons.

------
dilipd
Just from the description in Wikipedia, this is how I read Ray.

He is excellent in marketing, in sales, in networking & as an overall
promoter. All qualities we hackers need to cultivate as long as we control
ourselves & avoid pitching any vapor-ware.

Some examples of him promoting his product or himself:

1. At 17, appeared on the CBS television program I've Got a Secret and showed
off software that composed piano music.

2. Won first prize at the International Science Fair for the same.

3. Recognized by the Westinghouse Talent Search.

4. Personally congratulated by President Lyndon B. Johnson during a White
House ceremony.

...Incredibly, it goes on and on. Very fascinating, if read with the heart of
an entrepreneur. His whole life seems to be a chain of fantastic promotions.

------
Aegean
The article has a point that Ray Kurzweil's claims are nonsense, but let's not
focus on his claims about the genome-to-protein-to-brain-cells mechanism.
What's rather more important is how the brain does what it does and how that
compares with today's computers.

There are 50 to 100 billion neurons in the human brain, and the power of the
brain comes from the fact that you can create many orders of magnitude more
neural circuit combinations with those neurons. Each cell may be part of many
circuits, and learning involves the forming of these circuits. Now, let's
compare that phenomenal power with the power of the computer. It becomes
especially laughable when you say it's 50MB worth of information.

------
rsaarelm
Incidentally, Larry Page used the exact same argument back in 2007:

 _My theory is that, if you look in your own programming, your DNA, it’s about
600 Megabytes compressed… so it’s smaller than any modern operating system.
Smaller than Linux, or Windows, or anything like that, your whole operating
system. That includes booting up your brain, right, by definition. And so,
your program algorithms probably aren’t that complicated, it’s probably more
about the overall computation. That’s my guess._

([http://pimm.wordpress.com/2007/02/20/googles-larry-page-
at-t...](http://pimm.wordpress.com/2007/02/20/googles-larry-page-at-the-aaas-
meeting-entrepreneurship-in-science/))

------
jarin
If you want to read some hard science fiction about how AI might be developed
and how an AI might think, I'd highly recommend the in-progress series Life
Artificial: <http://lifeartificial.com/>

------
trominos
This is like saying that the entire universe can be simulated by using a
fundamental theory of physics as source code.

It's (to some extent) _true_ , and potentially interesting philosophically --
but completely meaningless from an engineering perspective.

------
jongraehl
Kurzweil denies the accusation strongly: [http://www.kurzweilai.net/ray-
kurzweil-responds-to-ray-kurzw...](http://www.kurzweilai.net/ray-kurzweil-
responds-to-ray-kurzweil-does-not-understand-the-brain)

------
wglb
Hmmm. _Sejnowski says he agrees with Kurzweil's assessment that about a
million lines of code may be enough to simulate the human brain._ A million
lines of code is hardly enough to run a _car_, for goodness' sake.

------
rsheridan6
I don't find these criticisms convincing. We haven't solved the protein
folding problem? So solve it. Is there some reason to believe it won't ever be
solved? If you had a sufficiently accurate simulation of a cell or organism
(which we don't, but we might someday), you could do experiments on it much
cheaper and faster than in a lab.

I do agree that there's no freakin' way this will be done in ten years, or in
the 62-year-old Ray Kurzweil's lifetime, or mine.

~~~
cageface
There is a Nobel-prize-winning theory that shows that you won't ever truly
solve this problem by studying the underlying atomic physics:

[http://en.wikipedia.org/wiki/Ilya_Prigogine#Dissipative_stru...](http://en.wikipedia.org/wiki/Ilya_Prigogine#Dissipative_structures_theory)

~~~
rsheridan6
I'm ignorant of this theory, but reading the link I don't see that it shows
that protein folding can't be modeled.

~~~
cageface
You'd have to read the papers cited below to get the meat of the theory but
the gist of it is that you can't model proteins by looking one level down,
i.e. by studying only the properties of the atoms that make them up. Proteins
have emergent behaviors not entirely predictable from the behaviors of their
constituent atoms.

Many still believe in the reductionist idea that a perfect understanding of
physics would lead to a perfect understanding of chemistry and then biology.
This is not the case.

~~~
orangecat
_Proteins have emergent behaviors not entirely predictable from the behaviors
of their constituent atoms._

Define "emergent". I can believe that a protein's behavior is extremely
sensitive to the initial configuration of its atoms and that as a practical
matter we can't (currently?) get detailed enough measurements to predict
exactly what's going to happen. But without exceptionally compelling evidence
I'm not going to believe that there are _different_ physical laws for proteins
than for their atoms.

~~~
cageface
I suggest you read Prigogine's book if you're really interested but his work
demonstrates that the problem isn't having detailed enough information about
state, it's the irreversibility of time, which makes physics fundamentally
non-deterministic.

~~~
TheEzEzz
What has irreversibility got to do with it? There are cellular automata that
are irreversible but obviously deterministic.

~~~
cageface
It has to do with entropy and thermodynamics and the fact that living systems
are so far removed from thermodynamic equilibrium that they generate
unpredictable, emergent behaviors.

The math behind it is pretty gnarly, but if you want to understand it I
recommend his book: [http://www.amazon.com/End-Certainty-Ilya-
Prigogine/dp/068483...](http://www.amazon.com/End-Certainty-Ilya-
Prigogine/dp/0684837056) .

A CA is not a good model for this.

------
geuis
PZ Myers does not understand Ray Kurzweil.

~~~
Devilboy
This is what I got from it too. Ray is not an idiot.

------
chadmalik
Ultimately, the criticism of hard AI that sticks is that its enthusiasts are
using a faulty "brain as computer" metaphor as the basis for their theories.
The brain and digital computers do NOT work the same way (the brain is not
digital), and assuming they do, and that therefore clever-enough computer
models can replicate the brain, seems to put very intelligent, brilliant CS
people at risk of banging their heads against a wall for decades.

More generally, it's fun to imagine things like this, but it's also good to
recognize real-world limits, as we live in a finite world with limitations;
living 700 years is not at all likely (or desirable, IMHO).

------
napierzaza
I heard an interview with him, and it wasn't very insightful. He says that
everything will get smaller and faster with varying detail or on specific
subjects. That's not very insightful.

~~~
tiles
I've heard him give the same talk at several different conferences, several of
them unrelated (entrepreneurship conferences, BlackBerry conferences, etc.):
talk of exponential progress, Singularity U, mind uploading, and his belief
he'll survive to be 700 years old. He's a bag of fantastic claims and an
excellent speaker, and he embodies a lot of the best hopes of technology of
the present age, as well as our collective insecurity that we've come so far
and yet seem to have improved the world so little.

But it's grating to see that he's getting so much media attention for such
blatant disregard of the human condition. No consideration of what a post-
sentient-AI world would accomplish. No regard for what happens when people
live to be 700 years old and population growth doesn't slow. His ideas are
like genetically engineering society with no regard for the collateral damage
it'd cause to the societal environment. I don't dislike Kurzweil for being an
optimist, I dislike him for being arrogant.

~~~
cryptoz
> and population growth doesn't slow.

Population growth _is_ slowing. Dramatically. There's tons of evidence that as
women are educated, they will have fewer children. And women are starting to
get better educations all around the world. There's absolutely no doubt that
population growth will slow (slash, is already slowing down) a bunch in the
coming decades.

Second, I'm confused about the 700 number. Where does that come from? Does he
think that in the next 30 years, we will be able to extend life so much that
he can keep on living? Why does that stop at 700?! Doesn't he think that some
time in the _next 700 years_ we'd be able to find a way to live longer than
700?

Frankly, I doubt there's much difference between finding a way to live to 700
and finding a way to live until the end of time.

~~~
pavlov
Maybe he's planning to start bull fighting at the age of 690 and base jumping
at the age of 695.

------
CamperBob
We eventually learned to fly, too... but not by sticking feathers up our
butts, as Kurzweil is advocating.

------
Devilboy
Everyone is an expert on AI and the human brain today I see.

------
efsavage
Synopsis: Kurzweil is underestimating the brain. Myers is underestimating
technology.

------
masterponomo
Kurzweil has written enough by now that anyone hearing his interviews should
be able to put his ideas in the context of his predictions about the
Singularity. I have always thought Kurzweil hedges his bets quite well, and he
makes it quite clear that the capabilities about which he writes depend on
future gains in computing power. Kurzweil is only human, of course, and it is
well known that he is trying to extend his own life so that he will be here
for the Singularity. He very much wants the things he predicts to happen in
his lifetime, but I don't hear or read him as asserting that that will
absolutely be the case. Myers' unnecessary "refutation" of (and insults toward)
Kurzweil reminds me of the reactions to the Segway. Kamen predicted that it
would revolutionize transportation; naysayers then piled on, ridiculing him
and saying that the Segway wouldn't work for commuting long distances on
American super highways. Well, no, but that's not what he meant by
revolutionizing transportation. I'm sure Myers is probably good at what he
does, and he is very much concerned with the now and the practical. We need
people like him, and we also need visionaries like Kurzweil and Kamen who look
ahead decades or centuries.

