
Ray Kurzweil Responds to "Ray Kurzweil does not understand the brain" - ankeshk
http://www.kurzweilai.net/ray-kurzweil-responds-to-ray-kurzweil-does-not-understand-the-brain
======
mcantor
I wish bloggers could have these kinds of debates without all of the butthurt
jabs like "... which so-and-so clearly does not understand..." or "I'm
surprised anyone actually pays attention to that kook."

If we are so very intelligent, then shouldn't we be smart enough to see that
ad hominem attacks result in emotional responses overriding the rational
desire to open our minds and learn, instead replacing those instincts with the
knee-jerk defensive reaction of tightening our hold on our existing beliefs,
right or wrong?

I understand that we're all passionately devoted to truth and understanding,
and we take umbrage when we perceive that someone is damaging those virtues.
But the kind of thinly-veiled high school drama aired in these two posts is
what disappoints me more than anything.

~~~
m0nty
Kurzweil, I think, comes off better in that respect, using phrases like "some
of my critics claim that I underestimate the complexity of the problem" which
neatly remove the personal jabs while still responding directly to the
argument. Myers's style is to get personal and it does occasionally get a bit
tired, but usually he gets away with it _because he's right_. Probably they'd
both get on a bit better if they actually met and had a reasonable discussion,
rather than relying on reports of what the other said.

~~~
bioinformatics
I don't think Myers is always right. Sure, it often seems that his
assessments are correct, until you look at them from the right angle.

~~~
kragen
Not always, but usually. In part this is because he's usually writing about
subjects (like evolution) that he knows a lot more about than his
interlocutors (like creationists).

In this case, the situation is reversed: he's writing about subjects
(artificial intelligence, information theory, and self-reproducing systems in
general) that he knows very little about, and that Kurzweil knows a great deal
about.

It's unfortunate that he didn't bother to find out what Kurzweil was actually
claiming, instead relying on some dumb-ass "science journalist", before
launching his little jeremiad. You'd think Myers, of all people, would know
better than to trust science journalists.

~~~
bioinformatics
I agree with you. I can only add that PZ Myers has, unfortunately for science
and scientific blogging, become a buffoon, and some of his posts are just as
detrimental to the scientific community as most of his opposition.

It's not difficult to know more about evolution than creationists do, as most
of evolutionary theory is _just_ logic and facts, something lacking in most
creationist reasoning.

This is not the first time he has used his mob leadership to attack someone,
and it won't be the last. He is not a scientist anymore; he's just a blogger
who, for better or worse, has cult followers.

------
midnightmonster
What I got from the previous article (admittedly as my own synthesis, not
afaict from the original text as such) was, sure, the program to build a brain
is only X Megabytes, but the computer that runs that program is the human body
growing in the physical universe. If you want to be able to run that program,
you're going to need to emulate at least the relevant instruction subset of
the physical universe. It's hard to know how large the relevant subset is, or
whether computers will ever plausibly get there. If the subset turns out to be
quite large, you'd need a computer that could model its own subatomic physics.

~~~
felxh
That is what I got from the previous article as well. However, from reading
Kurzweil's article I gather that this point is irrelevant, because he never
suggested reverse engineering the brain in that way. He used the genome
argument as an estimate of how complex the brain is, basically viewing the
genome as a compressed encoding of the brain.

Myers' argument only shows that the decompression algorithm, i.e. the route
from the genome to the brain, is insanely complex, but it doesn't say anything
about the actual complexity of the brain. So in essence, yes, if we chose to
model the brain in a highly compressed form like the human genome, we would
potentially need a computer that could model its own subatomic physics, but
that's not likely the way we would want to approach this.

Anyway, this doesn't mean the brain isn't very complex and impossible for us
to model at the moment. The main question is how long it will take until this
complexity is manageable (if ever).

~~~
ntoshev
> _the decompression algorithm, i.e. the route from the genome to brain, is
> insanely complex, but it doesn't say anything about the actual complexity of
> the brain_

This is where both Kurzweil and you are wrong. A very complex
compression/decompression scheme can achieve much better compression.
Intuitively, you can encode some of the information in the decompressor
itself, even if it is in a very abstract form. This is the reason why
compression competitions include size of the decompressor code, e.g.
<http://prize.hutter1.net/>
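The point can be made concrete with a toy sketch (the phrase and repeat count here are invented for illustration): a "cheating" decompressor that carries knowledge of the data can make the payload arbitrarily small, which is exactly why the Hutter Prize counts the decompressor's size too.

```python
import bz2

# The fair "size" of a compression scheme counts both the compressed
# payload and the decompressor itself, because knowledge of the data
# can be smuggled into the decompressor.

phrase = b"the quick brown fox jumps over the lazy dog "
data = phrase * 100

# Scheme A: a general-purpose compressor; the decompressor knows
# nothing about the data.
payload_a = bz2.compress(data)

# Scheme B: a "cheating" decompressor with the phrase built in;
# the payload only needs to carry the repeat count.
payload_b = b"100"

def decompress_b(payload: bytes) -> bytes:
    # The phrase lives in the decompressor, not the payload.
    return phrase * int(payload)

assert decompress_b(payload_b) == data
# Payload B is tiny only because the decompressor carries the phrase;
# counting decompressor size removes the loophole.
print(len(payload_a), len(payload_b))
```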

~~~
kragen
Even if the "decompression algorithm" is somehow optimized for making
intelligent brains easy to encode, the "source code" for it is still present
in the other half of the genome, so this doesn't affect the argument much.

~~~
ntoshev
It's not in the other half of the genome. All cells in an organism share the
same DNA, but different cells develop in a very differentiated manner
depending on the cells around them (e.g. should it be a neuron or a connecting
tissue?). The "decompression algorithm" is in the laws of physics,
biochemistry, and the way life works in general. Since we don't understand
these well enough, we can't make reasonable information-theoretic claims about
the "decompressed" genome.

~~~
kragen
It all starts from a single zygote. Every material in that zygote is either an
essential nutrient, a peptide made according to its DNA (including
mitochondrial DNA), or catalyzed from essential nutrients by those peptides.

The laws of physics and biochemistry are irrelevant, unless you're claiming
that a substantial fraction of the information needed to build intelligence
(we're talking tens of megabytes here) is encoded in the laws of physics or in
the biochemistry we share with snottites.

------
knowtheory
Amusingly i think that Kurzweil demonstrates his ignorance all the more
clearly in this post.

The brain is a product of not just the genome but the environment in which it
develops. You are not just the product of your genome. You are the product of
your genome and the womb in which you were incubated, and the environmental
stressors on your mother.

You can't go from genome => organism without a WHOOOOOLE lot of Ceteris
Paribus to fill in the gaps. Information theory is really pretty irrelevant to
the subject of developing a biological model of the brain.

To clarify (having read Kurzweil's post more carefully), a lot of the
interesting features of the brain are specified in the configuration and
connections of the brain. The base building blocks and types of neurons might
be specified in the genome (i'm not positive about that, i'm not a
geneticist), but even if that were the case, Kurzweil would have to
demonstrate that environmental factors were not critical in the development of
the structures of the brain that make us thinking beings.

I don't think he can do that. Biologists and computer scientists have spent a
long time doing research on the building blocks of neural networks, and not
only do the fields still face some difficult and fundamental challenges, but
for the time being, it's very clear that we do not have the tools necessary to
interrogate the brain in the manner we would need in order to be able to map
interesting things out.

Perhaps things have changed in the 5 years since i graduated university,
but... i haven't heard anything earth shattering that would indicate we do
have the level of sophistication to computationally model the brain in an
accurate manner, or even promising starts to such an endeavor.

Additional Edit:

Kurzweil's assertion that we can't predict how technology will change is
correct. Neither can he. Progress is a discontinuous, non-linear process. It
may be that we'll be able to track all the particles in the brain and watch
them as they develop over time, but then again it may very well be that we
won't. Tinkering with brains in invasive ways is difficult, let alone
_non_ -invasive ways.

~~~
illumin8
You're mis-characterizing his post:

"It is true that the brain gains a great deal of information by interacting
with its environment – it is an adaptive learning system."

Where does he say that the brain is only the product of its genome?

~~~
scott_s
That exact line illustrates that he does not understand Myers' objections. He
thinks Myers meant that we need to also model all of the information that the
brain gains as it interacts with its environment. That is not what Myers
meant.

What he meant is that we need to model _everything that happens to the brain
before it becomes a brain_. That is, how the proteins interact with each other
in the physical world to form the brain. Myers' analogy was that the genome
is the _data_ , and the physical laws of the universe are the program. People
are modeling how proteins fold and interact in the physical world, but it's a
hard problem, and we're far from being able to do it for an entire organ.

~~~
jasongullickson
Based on this post (and more so what I've been able to find of the original
discussion) I don't think he's overlooking this at all; if anything it's so
obvious that he's not calling it out literally, like saying "I'm building an
automobile"; you don't expect to have to say it's going to have wheels...

~~~
scott_s
That's because we already know how a car works. We know that wheels and an
engine are necessary, but a cupholder is not. Myers' point, as I understand
it, is that our understanding of the brain is not yet sophisticated enough to
know what we can exclude from simulation.

------
wolfrom
While I don't have the knowledge to dismiss Kurzweil's theories outright (I
only have a gut feeling), I must say that this response did not achieve a
refutation of the statement that "Ray Kurzweil does not understand the brain".

I did not see anything in that response that indicated that Kurzweil is basing
his beliefs on anything more than his 2001 adaptation of Moore's law to all
things technological; to me, the brain's biology and particularly its
physiology fall outside our current notions of technology.

~~~
raimondious
I agree — we can't simulate something we don't understand, and it's nearly
impossible to estimate how long it will take to understand anything. We don't
even know what consciousness _is_ , so how can we even begin to simulate it?

~~~
Symmetry
Really? Generally I mostly simulate things whose macroscopic behaviour I don't
understand, and I simulate them so that I can understand them better.

~~~
raimondious
It's true that we do that, but we at least understand the framework. Simulated
models are the basis of a lot of neuro research, but we need to do so much
more biology before we can begin to even try to simulate the brain in any
useful way on a macro level.

------
DanielBMarkham
I'm a singularitarian, but on a much longer timescale than Kurzweil. Like 500
years, instead of 20.

I think the much more interesting question here isn't "Can hardware simulate
the brain or not?" We'll figure that one out eventually. The interesting
question is "As hardware and software begin simulating the brain (already
happening), and integrating with it (already happening), what are the
implications for the species?"

What does a half-singularity look like? Because that may very well be how
this century turns out, and instead of arguing at the extremes, it's probably
much better to focus on the immediate practical implications of what's already
happening.

~~~
nkassis
The Borg is one example of a half-singularity. Star Trek truly invented the
future ;p

------
ulvund
"Something amazing will happen and it will resemble the human brain".

I imagine this sounds ridiculous to anyone working in Machine Learning,
Applied Statistics, AI, or whatever it's called at the moment.

Point me to the algorithms that have the potential of resembling the human
brain, and I will have a look. Talk a lot about "computers are becoming
smarter" and namedrop some brain region names and a lot of technically minded
people will zone out.

~~~
emzo
There is strong evidence that the neocortex works on a common algorithm;
vision, hearing, touch, language, behavior, and most everything else the
neocortex does are manifestations of a single algorithm applied to different
modalities of sensory input.

<http://en.wikipedia.org/wiki/Hierarchical_temporal_memory>
<http://onintelligence.org/> <http://www.numenta.com/Numenta_HTM_Concepts.pdf>

------
arohner
To make the argument more clear:

Kurzweil says "The genome can be compressed into X bytes, so that's an upper
limit on the complexity needed to simulate the human brain."

Myers says: "No. The genome says 'make a protein with this shape'. We don't
understand the full complexity of the brain until we understand all of the
physics (including potential quantum effects) that go into protein folding +
all the different environmental effects. Further, the information in the
physics and protein folding stuff is much much greater than information in the
genome".

------
nkassis
I at least agree with him that you don't need a trillion lines of code. A
couple of lisp macros should do it.

------
k0n2ad
Kurzweil's retort falls apart in several places:

"It is true that the brain gains a great deal of information by interacting
with its environment – it is an adaptive learning system. But we should not
confuse the information that is learned with the innate design of the brain."

He is misunderstanding Myers here - Myers is talking about the physical
ontogenesis of the brain during development (in utero), proteins interacting
with proteins (and the environment and such) during its development, not the
development of the brain through "adaptive learning".

"But we can take a much more direct route to understanding the amount of
information in the brain’s innate design, which I also discussed: to look at
the brain itself. There, we also see massive redundancy. Yes there are
trillions of connections, but they follow massively repeated patterns."

"Yes, the system learns and adapts to its environment..."

Again, Kurzweil is failing to address the crux of Myers' argument: that the
design of the brain is not only in the genetic code, but in the intricate
"playing out" of cells during brain development. Myers was not talking about
the brain adapting to its environment in the holistic or psychological sense,
but on a much more fine-grained biological level.

------
scotty79
> [...] It is true that the information in the genome goes through a complex
> route to create a brain, but the information in the genome constrains the
> amount of information in the brain [...]

This is false. The genotype-to-phenotype mapping is not a general-purpose
compression algorithm, so there is no such limit. You could theoretically
compress all information about the universe into a single bit. The algorithm
would be: if you see a one, return the full information about the universe;
if you see a zero, throw the exception "Error in input data."

Not every phenotype can be compressed to a genotype. There is no DNA for an
animal with 17 wheels, or for a cheesecake.

It is more similar to fractal compression or procedural generation, where a
huge (even infinite) amount of data can be "compressed" into a simple rule.

If simple math can do something like that then complex machine of physical
interactions can do much more.
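The procedural-generation analogy fits in a few lines: a one-rule program (the elementary cellular automaton Rule 90, chosen here purely as an illustration) unfolds into an arbitrarily large Sierpinski-like pattern from almost no stored data.

```python
# A tiny "genotype": one XOR rule. The "phenotype" it unfolds into can
# be made as large as you like -- the information is in the rule plus
# the laws of its world, not in any stored description of the output.

def rule90(width=64, steps=32):
    row = [0] * width
    row[width // 2] = 1              # start from a single live cell
    history = [row[:]]
    for _ in range(steps - 1):
        # Each cell becomes the XOR of its two neighbours (wrap-around).
        row = [row[(i - 1) % width] ^ row[(i + 1) % width]
               for i in range(width)]
        history.append(row[:])
    return history

pattern = rule90()
for row in pattern[:8]:              # print the first few generations
    print("".join("#" if c else "." for c in row))
```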

------
dasht
Kurzweil snipes "It is an argument from information theory, which Myers
obviously does not understand." With some delight, I would like to explain how
Kurzweil incorrectly uses information theory and arrives at a false
conclusion.

In fairness, I'll also explain why Kurzweil's main thesis appears to be
unassailably correct - albeit mainly because, in the end, he makes only quite
weak and uncontroversial claims.

(For biological criticisms of Kurzweil, see "knowtheory"'s comment.)

Kurzweil seeks to establish an upper bound on a quantity he calls "the amount
of information in the brain prior to the brain’s interaction with its
environment."

He is not terribly precise about how that quantity is to be defined. He does
tell us that his upper bound will show that the "design" of the brain isn't
very complicated and that it will not require "trillions of lines of code to
create a comparable system".

To establish his "amount of information" upper bound he looks at the number of
"bits" in a complete human genome. A human genome contains about three billion
base pairs. Each base pair position can have one of four possible values
(i.e., it contains two bits of information). Thus he comes up with 6 billion
bits overall, around 715 megabytes (he says 800); he points out that genomes
are far from random and asserts that the whole thing can be compressed down
to, perhaps, 50 megabytes.
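The arithmetic behind that estimate is easy to check:

```python
# Kurzweil's back-of-envelope estimate, as summarized above.
base_pairs = 3_000_000_000      # ~3 billion base pairs in a human genome
bits_per_bp = 2                 # 4 possible bases -> 2 bits per position

total_bits = base_pairs * bits_per_bp    # 6 billion bits
total_mib = total_bits / 8 / 2**20       # bytes -> mebibytes
print(round(total_mib))                  # 715 (MB, before any compression)
```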

Apparently we are to believe that the amount of "code" needed to build a
system "comparable" to the brain cannot possibly be "trillions of lines"
because the genome does it in less than 50 megabytes.

A zygote, in other words, is a machine. It runs the code in the DNA. The code
tells how to build a brain. Our simulator will be some kind of programmable
system. We'll supply it with code. The code will tell it how to make or
simulate a brain-like thing. If a zygote does it with a mere 50MB of code, our
machine will as well.

It is just there, in that last step, that Kurzweil invokes a fallacy. In
general, if you have two different kinds of machines, and you want to program
each to compute the same result -- _upper bounds on the program size on one
machine tell you nothing about the upper bounds on the other machine_.

That is why, for example, when Chaitin lectures about Omega he is always
mumbling "relative to some choice of Turing machine" (at least initially,
until it is then understood to apply throughout the talk).

If all you know is that the zygote machine and our simulator machine are two
machines - program size on the zygote tells you _nothing_ about program size
on the simulator.

There is, in other words, no abstract quantity that describes "the complexity
of the design of the brain". Information theory does not recognize any such
concept. There is no such thing as the irreducible complexity of a program
other than relative to a particular machine.
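A minimal sketch of that machine-relativity point (the two "machines" below are toy interpreters invented for illustration): the same output needs a long program on one machine and a one-character program on another, so a size bound measured on one machine says nothing absolute.

```python
# Program size is only meaningful relative to a machine: machine B has
# the target built into its "instruction set", so its shortest program
# is a single character, while machine A needs the full text.

target = "abracadabra" * 1000

def machine_a(program: str) -> str:
    # Machine A can only echo literal text.
    return program

def machine_b(program: str) -> str:
    # Machine B has a primitive "!" that expands to the target.
    return target if program == "!" else program

prog_a = target   # shortest program for the target on machine A
prog_b = "!"      # shortest program for the target on machine B

assert machine_a(prog_a) == machine_b(prog_b) == target
print(len(prog_a), len(prog_b))   # 11000 1
```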

The upper bound from the zygote could actually translate to our simulation
machine _if_ we agree that the simulator will operate on principles
essentially the same as the zygote. Alas, Kurzweil says the opposite: "I did
not present studying the genome as even part of the strategy for reverse-
engineering the brain. [....] It is not a proposed strategy for accomplishing
reverse-engineering [....]"

Kurzweil has to give up either the relevancy of the size of the genome, or his
denial that we'll build a machine that operates on similar principles to a
zygote (or on principles that can be proved computationally similar to a
zygote). Either way, he should not be looking down at Myers' understanding of
information theory.

All of that said, if you strip out his hype his only substantial claims seem
to be that we'll build hardware that can run neural network software very
efficiently (perhaps in real time for brain-scale networks) and that we'll get
clues about useful network topologies from looking at brains.

That, my friends, is a perfect message for a "futurist" to deliver to a
dazzled audience because, well, to many of us it is what you call "very old,
somewhat boringly obvious news".

~~~
ewjordan
Sigh...where's Eliezer when we need him? :)

 _It is just there, in that last step, that Kurzweil invokes a fallacy. In
general, if you have two different kinds of machines, and you want to program
each to compute the same result -- upper bounds on the program size on one
machine tell you nothing about the upper bounds on the other machine._

Right on, upper bounds on program size do not directly translate from one
machine to another. So we can't set a hard bound, and we always need to speak
about information content relative to a given machine - too few people have
noticed this.

However, science doesn't give up when hard bounds prove impossible. There's
always statistics to fall back on.

And statistically speaking, we can _absolutely_ translate information content
claims across machine boundaries if we're careful, as long as we have no
reason to believe that the machine in question is somehow "special".

A priori, knowing nothing else about machines A or B, we would expect that if
we observe an upper complexity bound on an algorithm relative to machine A,
there's better than a 50/50 shot that the same upper complexity bound will
also hold relative to machine B (it's greater than 50% because it's almost
certain that our observation did not pick out the best theoretical upper
complexity bound).

As good Bayesians, exactly how this all translates into a best estimate for
the bound relative to machine B depends on a lot of stuff (all our prior
distributions - in fact, in cases like this talking about upper bounds is
silly, it's far better to talk about the distribution of upper bounds, but I
digress).

But it's _simply wrong_ not to adjust your complexity estimates relative to
machine B in some manner based on an observation of machine A, unless you know
something special about machine A that differentiates it from all machines in
such a way that it's well-suited to express the algorithm under consideration.

So I ask you this: do you know a fact about "zygote machines" that makes them
much better suited to constructing intelligence algorithms than most other
machines? Do you have some reason to suspect that their dynamics are more
likely to lead to intelligent computations than, for instance, normal
computers?

Because otherwise, Kurzweil is a _hell_ of a lot closer to the "right"
estimate than you are.

(Edit) FWIW, I posted a more detailed argument elsewhere in this thread
(<http://news.ycombinator.com/item?id=1621053>) that more carefully boils
Myers' claim down to just about the same question I asked you.

~~~
dasht
ewjordan: That's beautiful. Let me sort it out slightly and offer a counter
argument or at least response to part.

So, we agree that Kurzweil's "information theory" snobbery is wrong. His real
argument is "C'mon... look, the genome's small. Under a microscope the brain
looks like it has many parts but very regularly arranged. How hard can it be?
Trillions of lines? Please!"

That's not an information theory argument. That's a vague, hand-wave argument
to an underspecified but compelling-sounding conclusion. "Reverse engineer the
brain." "Build something similar." WTF could possibly happen that would prove
him wrong? How exactly could Kurzweil's "predictions" here be falsified?

Put another way, if he changed his rebuttal to Myers to be "I made no
substantive argument or claim. Myers responded as if I had. Therefore Myers
has erred." -- I could accept that. It then does raise the question of why
Kurzweil was not more explicit about making no substantive argument or claim
but that's a separate matter. It's moot because Kurzweil isn't making that
rebuttal.

To your statistical argument, which is quite interesting...

I think the meat of your case is here:

 _So I ask you this: do you know a fact about "zygote machines" that makes
them much better suited to constructing intelligence algorithms than most
other machines?_

Rather than quibble over what "intelligence algorithms" mean I am going to
pretend you said "human brains or things very similar to such".

The answer is "Yes, I know quite a few facts that strongly support the
hypothesis that human zygotes are, among the machine-like entities in the
world, uniquely well suited to building human brains -- and very difficult to
replace by any substantially different kind of machine."

At this point I would basically just repeat Myers' argument, perhaps adding in
more specific but randomly selected details.

If you want to rescue Kurzweil by leaving the abstract "information theory"
b.s. he offered and looking at the specifics -- then we've gone full circle.
Kurzweil doesn't understand the brain.

Now there is an exception to all this. A different way of looking at it. I
think that if you scrape off layers of hype and clouds of obfuscation,
Kurzweil will consider himself to be proved correct if, in say 20 years, --
and just for example:

You can buy a box that has at least stereo video inputs and that, using an
artificial neural net, does exquisitely accurate facial recognition. Or that
can look at visualization of architectural CAD drawings -- millions of
variations per hour -- and select a few with a high percentage chance of being
pleasing to most people. Or that, in combination with the tactile-feedback
neural net, can manipulate agonizingly capable microscopic tools for very
delicate surgeries or for building nanotech or ...

I think it's f'ing obvious that (unless civilization fails - not out of the
question) we have all that in 20 years and maybe sooner. I think that was
f'ing obvious about 30 years ago, and to some, pretty clear even before that.

I don't think those devices count, in any reasonable way, as "reverse
engineering the brain" but I guess they count as "reverse engineering a few
select aspects of useful computational tricks that brains do in real time,
quite efficiently".

I don't think those inevitable advances offer much support to Kurzweil's
"exponential" technology - they are straight up incremental, boringly
predictable advances. I don't think they're a real challenge to ordinary human
predictive powers: we've seen this stuff coming for decades. Don't even get me
started on how "the singularity" is dangerous, meaningless, new-age clap-trap.

But, Kurzweil dresses up such banal observations to make it sound like, any
day now, massive machine mega-brains will take over the world. Hopefully
they'll love us and kindly upload our consciousness and.....

So it is, to repeat my original point, with some delight that I take on
Kurzweil's "information theory" sniping at Myers.

~~~
ewjordan
_Rather than quibble over what "intelligence algorithms" mean I am going to
pretend you said "human brains or things very similar to such"._

I think this pretty well illuminates the source of our "disagreement" (and
resolves it), apart from this we're on the same page about the details, I
think.

To me, there's a very big difference between what I would qualify as
successful AGI (artificial general intelligence) and "human brains or things
very similar to such", in that successful AGI could work _very_ differently
from a human brain (and IMO, probably will, if/when we ever get there) and
still qualify as a bulls-eye hit.

Taking that as our (relatively wide) target, I don't think protein folding and
all that fun stuff adds much information.

However, if our target is narrower, simulating a brain or something close to
that, then I agree, the details of biology do shift the estimates
considerably.

Looking over one of Kurzweil's actual statements:

 _It is true that the information in the genome goes through a complex route
to create a brain, but the information in the genome constrains the amount of
information in the brain prior to the brain’s interaction with its
environment._

...it appears that you're right, he's talking more about creating an actual
brain than creating some algorithm that solves the same problem (possibly in a
different way). Mea culpa: I unwittingly altered his argument before defending
it, so I've really been arguing about what he should have said rather than
what he actually said.

Not all is lost, though: the fact that his claim _can_ be fixed up so that
it's got some actual teeth is still worth noting, and I think Myers' criticism
is a bit too broad (he essentially claims that you can't say anything useful
based on the length of a DNA string, which is not true). I think the real
lesson here is that _if_ we want to use DNA length as a proxy for information
content, we need to be exquisitely careful about what "information content"
means relative to the systems we're discussing, and nail down exactly what
systems those are and how we're defining our problem.

~~~
jacquesm
> Taking that as our (relatively wide) target, I don't think protein folding
> and all that fun stuff adds much information.

It may not add information at all, just as the physical structure of a
transistor does not add much information to what a computer can or cannot do.
In a computer, however, the 'smarts' are not even embedded in the connections
(as is posited to be the case for the brain), but in the software. Software is
state, and software is, to all intents and purposes, as good as invisible,
unless you find a way to do a dump of what sits inside a running brain.

And then you _still_ have to reverse engineer the computer that that software
was running on in order to be able to resume.

Myers' DNA-length argument is easily disproved by reducing to extremes: if
DNA length were irrelevant, you could encode a brain with an arbitrarily short
strand of DNA. The fact that it takes up as much as it does proves that, from
an information-processing perspective, the brain is not a simple structure. So
length is relevant -- maybe not the whole story, but definitely a factor.

Part of me wonders if we are going to find one day that the genome does not
just encode proteins but also contains the 'bootstrap' state required to power
up a brain after its initial configuration is complete.

That's highly speculative but there is a lot more data in the genome than what
seems to be used and nature never was _that_ inefficient, even if there are
many things that could be improved on with the function oriented hindsight
that we have.

------
megaman821
I believe that things in nature that look impossibly complex are really
constructed from a series of simple patterns. Understanding the strategies
nature employs to create a thinking human brain will yield much more usable
results than understanding the bio-mechanical processes which transform genes
into a fully functioning brain.

------
terra_t
I've made an "all in" bet that AGI is going to be attained by the "human
memome project" long before Kurzweil's biomimetic boondoggle achieves anything
-- the biomimetic boondoggle is appealing to many, however, because the whole
society is accustomed to flushing hundreds of billions a year down the drain
on biomedical technology without any accountability.

~~~
arethuza
"human memome project" - do you mean top down symbolic AI projects like CYC?

~~~
terra_t
CYC is an important development in that direction, but a modern memomic
approach doesn't privilege top-down over bottom-up approaches.

Like the human genome, the human memome is already available (in natural
language) and the challenge is interpreting it. I don't believe the state of
the art is good enough to create an upper ontology that can capture human
experience, but sectors of ontology that capture chunks of it can be built
from the bottom up now and merged as necessary.

Think of it this way. I know something about quantum physics and I know
something about french lit crit. I don't need a general theory that
encompasses both of them until I actually need that general theory -- at that
point I'm going to build out as much of that theory as I need by an appeal to
thought, experience and experiment.

The immediate term strategy is to find areas in which "unreasonably effective"
strategies make it possible to extract facts and partially solve the
"grounding problem". It's important to recognize that this is essentially a
finite task: like the Earth's atmosphere, the Earth's noosphere has no
perfectly defined 'edge'... However, there's a principle of interconnection
and attraction of concepts that causes it to form a 'main body' that is
essentially finite. The game of 20 questions shows that, more or less, the
scale of human shared reality is 10^6 or so terms.
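The 20-questions estimate is just binary arithmetic: each yes/no answer at best halves the candidate set, so twenty questions distinguish at most 2^20 items. A minimal sketch:

```python
# Each yes/no question at best halves the space of candidates,
# so 20 questions distinguish at most 2**20 concepts.
n_questions = 20
distinguishable = 2 ** n_questions
print(distinguishable)  # 1048576, i.e. on the order of 10^6 terms
```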

------
abecedarius
Did that 50MB figure derive from just the protein-coding part of the genome? I
just tried out the sequence from
<http://hgdownload.cse.ucsc.edu/downloads.html#human>. After converting it to
binary, 2 bits per base pair, bzip2 compresses it down to about 650MB (from
715MB uncompressed). I'd like to know what assumptions give you an order of
magnitude greater compression, and how probable they are. I suppose the
biggest factor has got to be junk DNA, since species can vary so much in
genome length -- but I get the impression there's a lot of uncertainty still
about functional noncoding DNA.
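For anyone who wants to reproduce the packing step, here is a minimal sketch. A short random sequence stands in for the actual chromosome download; note that random data won't show the roughly 10% bzip2 gain that repeat-rich genomic sequence does.

```python
# Encode each base in 2 bits, then compress the packed bytes with bzip2.
import bz2
import random

CODE = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

def pack(seq):
    """Pack bases into bytes, 4 bases (2 bits each) per byte."""
    out = bytearray()
    for i in range(0, len(seq) - len(seq) % 4, 4):
        b = 0
        for base in seq[i:i + 4]:
            b = (b << 2) | CODE[base]
        out.append(b)
    return bytes(out)

random.seed(0)
seq = ''.join(random.choice('ACGT') for _ in range(40000))
packed = pack(seq)                  # 2 bits per base: 10000 bytes
compressed = bz2.compress(packed)
print(len(packed), len(compressed))
```

Running the same packing over the real FASTA files (skipping headers and `N` runs) is what yields the ~715MB figure quoted above.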

(The other number estimated was how many lines of source code the compressed
50MB or 25MB might correspond to. In the last thread I couldn't get this
number to work either, though it came closer; and I actually know something
about programs.)

------
mrpsbrk
I have this kinda particular interpretation that "the best science is the one
that makes fewer assumptions", or at least is better equipped to assess
and deal with its own assumptions. In that vein, what bothers me about
Kurzweil is that he seems to be just taking some assumptions and running with
them.

Specifically, I believe that he assumes that intelligence and computation are
the same thing. At least, he asserts that both are equimaterial --- that a
sufficiently powerful calculating machine is bound to be able to generate
intelligence.

I do not know if intelligence == computation.

I do accept that, in many domains, the two things are interchangeable. Like,
for example, if you are interviewing for a programming job and you can't do
division in your head, that is a bad sign.

But if we are looking to _build_ intelligence _from_ computation, then I think
we will bump into any differences that exist.

If computation and intelligence are equal or at least similar, that would be
an interesting fact. It would teach us a lot about ourselves. To my taste it
is a very interesting line of questioning. But it is not proven.

Really, we can't even define intelligence! (As a sidenote, I think that
is exactly the point of the "Turing Test": not to prove AI, but to show that
intelligence is not clearly defined.)

If intelligence is a kind of computation, then Moore's Law means AI,
definitely. And in that case, Kurzweil's estimate of two decades is as good as
any. If intelligence != computation, then some completely unrelated discovery
has to intervene.

There is one thing that makes me doubt the assumption of equality, though.
Namely, computers are already far better at computation than we are. I am
30 years old and I can't recall a time when I didn't have calculators available
that were far more powerful and faster than I am. If the translation from
computation to intelligence were straightforward, my feeling is that the
exponential nature of Moore's Law should have already made AI a reality before
I got out of college.

------
lancerp
Even if there were only 1 bit of information describing the brain, it wouldn't
matter: the bits of information do not have a one-to-one correlation with
logic gates or anything else, for that matter. It is silly to compare genes
with "lines of code"; you also need a machine to parse and eval those lines
of code, so you should also include the instructions for that machine.

------
postfuturist
Given the idea that consciousness is related to complex quantum interactions
at the molecular level, and the fact that computing power hits hard limits
with regard to physical complexity at nano-scale, it seems unlikely that we'll
be modeling human brains well enough in 20 years to reproduce conscious human
thought.

~~~
ewjordan
_Given the idea that consciousness is related to complex quantum interactions
at the molecular level_

Yes, that's an idea; it just so happens to be (at least outside the world of
philosophy, where it's still easy to find people that don't believe in special
relativity...) a fringe, unpopular one with pretty much no evidence whatsoever
to back it up, because there's still no consensus on what "consciousness" even
means, or whether it's definable even in principle, let alone measurable.

But in any case, consciousness has absolutely nothing to do with strong AI,
which is defined in terms of what the AI can do, not whether its internal
states are conscious or not.

 _it seems unlikely that we'll be modeling human brains well enough in 20
years to reproduce conscious human thought_

Sure thing, I can agree with that, practically as a matter of definition.

Luckily, in the practical AI community "conscious human thought" is not what
anyone's trying to create. They'd be perfectly happy with non-conscious, non-
human thought that can learn a wide variety of things, regardless of how it's
implemented.

~~~
postfuturist
Fair enough.

I guess I don't understand what non-conscious thought is supposed to be,
exactly.

~~~
kragen
It would be great if there was an algorithm that would pick out the best 15 HN
comments for me to read each morning, instead of relying on other people's
votes. That algorithm would necessarily have to process human language at a
high enough level to distinguish good from bad. But would it necessarily have
to be self-aware, considering the absurdity of its existence in its off-hours?
We don't know. Certainly there are programs today doing many tasks that once
we would have thought required a self-aware entity; composing listenable
music, translating badly between human languages, and part-of-speech tagging,
to name a few.

------
koeselitz
[http://scienceblogs.com/pharyngula/upload/2010/08/thinkingme...](http://scienceblogs.com/pharyngula/upload/2010/08/thinkingmeat.jpeg)

------
willfully_lost
So to some up: "Yes, the brain is very complex, but you don't understand the
power of exponential growth in information technology!" Surprising.

~~~
adbge
That should be "to sum up".

------
erikpukinskis
This whole argument seems to be about whether you need to include the design
of the factory in the specifications for the chip.

------
10ren
I'm writing this to try to clarify my understanding. I think this is the
essence of Kurzweil's argument on estimating complexity:

Let's take the genome (DNA) as a program + data, and the phenome (the
organism) as an output (and assume the mother is in adequate health, and
development proceeds normally.) Then the number of different possible phenomes
is limited to the number of different genomes (it could be fewer, if there are
non-significant regions of the genome, because then more than one genome
could produce the same phenome. That is, the function G->P is not necessarily
injective.)

While this doesn't directly describe the complexity of a phenome, the argument
is that the complexity of a thing is no greater than the means of defining it,
and that any additional complexity in the resulting thing must contain
redundancies (perhaps very hidden) that can be eliminated. A simple case is a
program to print hello 1000 times. The program is short, and though the output
is long, it contains redundancies. What about chaotic systems and fractals,
or 'normal' numbers like pi, where great complexity arises from simple rules?
The argument is that this is merely apparent complexity, and in reality
contains great redundancy. Big output changes from small input changes don't
disprove this; consider changing 1000 to 2000 in the above. While a particular
sequence of pi digits, or a particular fractal frame, might seem complex,
there is also the input to consider: the specification of that part
(eg which digits) also takes information.
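The "print hello 1000 times" case can be checked directly with a general-purpose compressor (a sketch, using bz2 merely as a convenient redundancy detector):

```python
# The program "print hello 1000 times" is short; its output is long
# but redundant, and a compressor recovers most of that redundancy.
import bz2

output = "hello\n" * 1000            # 6000 characters of output
compressed = bz2.compress(output.encode())
print(len(output), len(compressed))  # 6000 vs. a tiny fraction of that
```

The compressed size lands near the size of the generating program itself, which is the sense in which the long output "contains redundancies".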

\---

Here's the theoretical flaw: considering a genome as a program, what if the
entire program isn't really listed in the genome, but it calls library
functions? Obviously, it becomes much shorter, but we're not measuring the
library code, so it's cheating. Or, what if the program is written in a
high-level language rather than a low-level one (like assembly)? This is
equivalent, if you consider the syntax of the language as implying function
calls. Clearly, we still have the same mapping of program->results, and the
number of different results is still limited, but those results are much more
complex than the program; the programs don't represent the complexity of the
output.

How does this apply to the development of a phenome from a genome? Does
anything add information, like standard libraries? One might say that the
mother is like a standard library - but little information seems to be input
in this way (consider development of a chicken egg); and fundamentally, the
mother is also a phenome that can be specified by the very genome in question:
it's self-hosting.

Does protein folding add information? It is very complex, but is that just how
it operates (the way that the implementation runs), or does it also add
information to the output? Remarks on this seem to say only that it's complex
and unknown and therefore hard to simulate. I'm talking about whether the
complexity of action adds information to the output. If so, how much
complexity? Is it significant, comparable to a standard library, or is it more
like a trivial macro?

I don't know how much _information is added to the phenome_ by protein
folding; this is a question for the biologists. I think addressing it squarely
in these terms would defuse much of the emotion in the discussion. But I'll
guess:

If we assume protein folding is dictated by quite a long sequence (ie. it's
not a context of one, like G->down, U->up, A->left, C->right, but a function
of say 100s of bases -- and of course the fold direction is not always 90
degrees), then there is scope for an (almost) arbitrarily complex function,
from sequence->fold. The complexity is limited by the input (number of bases
involved) and output (the actual fold).

If we then assume that the shape of the protein is the crucial thing for its
interaction with raw materials and other proteins (eg. as an enzyme), then
complexity of protein folding does directly translate into complexity of
results - even at this, the finest-grain level of operation.

Although protein folding potentially adds complexity, I find it hard to
imagine that it would add information comparable to a standard library, such
as, say, 50 bases specifying an eye or a liver (which a standard library might
do, like Python's SimpleHTTPServer). That would be miraculous, if the laws of
physics were so favourable to the particular needs of organisms (like a
programming language that is customized to a particular application, as modern
libraries contain code for TCP/IP and HTTP.) I find it easy to believe that
it's more like the variation in syntax between (say) lisp, java and assembly.

So, I don't think protein folding adds much complexity to the result; it's
more like a _general_ purpose programming language than a set of _specialized_
library functions (I think this question is the crucial issue in the debate.)

\---

To conclude, I think the complexity of the genome _does_ estimate the
complexity of the phenome: I agree with Kurzweil that the brain is (roughly) as
complex as the sections of the genome necessary for its development.

~~~
ewjordan
_So, I don't think protein folding adds much complexity to the result; it's
more like a general purpose programming language than a set of specific
library functions._

This is dead on.

The reason that library functions enable great code compression at higher
levels is that they are designed _specifically_ to enable code compression for
the expected high level uses. Library designers pick out a small subset of all
possible programs, and write their library code so that those programs can be
expressed efficiently.

Unless protein folding has, by sheer chance, ended up working _just right_ to
make wiring up an intelligent brain especially easy (and I'm not talking about
the physical wiring, but the logical wiring), then it's silly to expect that
it adds much information.

On the other hand, the complexity of protein folding _is_ vitally important
for efficient evolution of any structure, and this is something that we need
to take to heart: even if the complexity doesn't add much _information_ , it
allows small differences in the genotype to create massive differences in the
phenotype. This shuffles regions of the fitness landscape around, so that in
certain areas evolution is not so much climbing a hill as it is bouncing
around atop a frothy unpredictable fractal. It's a fantastically clever way to
avoid getting stuck in local maxima while still retaining a lot of the
advantages of local search, as long as your fitness landscape has enough high
points on it and the shuffling is not too severe.

------
mkramlich
I'd be more impressed if Ray Kurzweil's AI responded to it.

