
What Darwin's Theory of Evolution Reveals About Artificial Intelligence - llambda
http://www.theatlantic.com/technology/archive/2012/06/a-perfect-and-beautiful-machine-what-darwins-theory-of-evolution-reveals-about-artificial-intelligence/258829/
======
calinet6
Just one comment on a minute quote from Penrose that happened to be in the
article:

 _"To my way of thinking there is still something mysterious about evolution,
with its apparent 'groping' towards some future purpose. Things at least seem
to organize themselves somewhat better than they 'ought' to, just on the basis
of blind-chance evolution and natural selection."_

This is a common fallacy about evolution, and is explained beautifully by the
anthropic principle, or in other words, the innate selection bias of our
existence. We've self-selected for our own awareness of our circumstance and
existence. Things are not organizing better than they "ought" to, they've just
happened to organize to a sufficient point that we exist and perceive this
process and say things about it like the above quote.

It is in the same way that someone who wins the lottery must think themselves
exceedingly lucky that they, of all the millions of people participating, have
won. They must think there is something mysterious about this, that things
turned out somewhat better than they 'ought' to, just on the basis of blind
chance.

Yet, what is the probability that some person, of the entire pool of people in
the world, wins the lottery? One. It has necessarily happened by the nature of
the lottery.

We as a species have won this lottery, by the mere nature of our sentience. We
should not think it mysterious or unusual in any way. However, we are lucky in
the sense that we are here; we are special in that we can perceive and
understand. As long as we understand the fact that there is no "should" in
evolution, this is a perfectly fine thought. It just happened, and on this
planet, it produced something able to understand itself. As Carl Sagan said,
"We are a way for the cosmos to know itself." Certainly there is much
metaphysical and philosophical consequence to our existence, but
scientifically and probabilistically speaking, it makes perfect sense.

Consequently, I believe it may be much more difficult to reach true AI than
some have postulated.

~~~
ehsanu1
_This is a common fallacy about evolution, and is explained beautifully by the
anthropic principle, or in other words, the innate selection bias of our
existence._

I'm a little confused about how the anthropic principle explains this fallacy
(and it is a fallacy to be sure). I guess you're referring to how natural
selection (and all the other sorts of selection that occur regularly) lead to
the existence of "appropriate" organisms and features.

But I think knowing some of the actual details in how evolution works at all
the different levels (molecular bio, epigenetics, competition/cooperation,
behavior, etc, etc, etc) serves as an even better explanation. Here's a
25-lecture course from Stanford that explains just that:
<http://www.youtube.com/watch?v=NNnIGh9g6fA>

~~~
FreakLegion
I'm going to take a few seconds here to be a pedant and disentangle the
structure of an argument from its contents.

Strictly speaking, a fallacy is a defect in the structure of an argument, i.e.
in the logic that connects a premise to a conclusion. What you and the parent
have identified is simply a false premise ("Things at least seem to organize
themselves somewhat better than they 'ought' to"[1]). A false premise may
result in the wrong conclusion, but an argument can be _wrong_ without being
_fallacious_.

This distinction isn't so much relevant here, but it comes in handy when you
want to pinpoint sources of disagreement in a discussion. Lobbing the f-word
around (even when deserved) just makes people defensive.

1\. Of course the argument used to derive this premise might be fallacious,
but that argument isn't given.

~~~
ehsanu1
Thanks for making the distinction, but the word "fallacy" can also be used
more loosely than the way you define it. Googling "fallacy definition" gives
me:

    1. A mistaken belief, esp. one based on unsound argument.
    2. A failure in reasoning that renders an argument invalid.

We were using it in the first sense.

------
feral
I recently watched: <http://www.youtube.com/watch?v=Uoda5BSj_6o>

in which Eliezer Yudkowsky talks about challenges in making 'friendly' AI.
Yudkowsky draws a lot of examples from evolution to show how a system blindly
optimising for an objective function can result in all sorts of things that a
human specifying the objective function wouldn't ever have expected.

The linked article at one point says: "In order to be a perfect and beautiful
computing machine, it is not requisite to know what arithmetic is." After
watching Yudkowsky's talk, which I found very insightful, I got the impression
that it could be _very difficult_ to make a machine that wouldn't do _very
unpredictable_ things, unless you had a very deep understanding of exactly
what you were telling it to do.

The talk might be of interest.

~~~
_delirium
That's definitely true, and a major problem beyond friendliness (i.e. way
before any singularity). A traditional assumption in machine learning and
other areas of AI is that you can factor out the problem-specification parts
from the algorithmic problem-solving parts, but it's _very_ easy to get
results that are unexpected, except in the rare cases where you have a
100%-correct, obvious objective function handed to you by the structure of a
problem. When a human writes down what they think the objective is, there are
very commonly a lot of unspoken things they intended to be included:
"optimize [x], but without doing anything obviously stupid that I wouldn't
want you to do". Hence a lot of iteration on objective
functions is needed in real-world applications, and recent-ish research
focuses on alternative formulations like interactive machine learning,
preference elicitation, etc.
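
A toy illustration of that gap (my own sketch, not from the talk or the comment above): the human *intends* "find a pleasant English-looking word", but the objective as actually written only counts occurrences of one letter. The optimizer sees only the score, so it satisfies the letter of the spec rather than its spirit.

```python
import random

# Intended goal: a pleasant English-looking word.
# Objective as written: score = number of 'a' characters.
def score(s):
    return s.count('a')

def hill_climb(length=8, steps=4000, seed=0):
    """Randomly mutate one character at a time, keeping non-worsening moves."""
    rng = random.Random(seed)
    letters = 'abcdefghijklmnopqrstuvwxyz'
    current = ''.join(rng.choice(letters) for _ in range(length))
    for _ in range(steps):
        i = rng.randrange(length)
        mutated = current[:i] + rng.choice(letters) + current[i + 1:]
        if score(mutated) >= score(current):  # accept if no worse
            current = mutated
    return current

print(hill_climb())  # a degenerate string like 'aaaaaaaa', not a word
```

Nothing in the loop is wrong; the unspoken constraint ("be a word") simply never made it into the objective.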

~~~
adrianN
For an example of this, have a look at this essay [1] by George Dantzig, in
which he tells the story of how he tried optimizing his diet using linear
programming.

[1] <http://dl.dropbox.com/u/5317066/1990-dantzig-dietproblem.pdf>
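
As a toy version of the same shape of problem (entirely made-up foods and numbers, not Dantzig's data, and brute force rather than a real LP solver): minimize cost subject to nutrient minimums. "Edible as a menu" is nowhere in the objective, so nothing stops the optimum from being absurd, much as Dantzig's early runs famously prescribed things like gallons of vinegar.

```python
from itertools import product

FOODS = {              # name: (cost $/serving, kcal, protein g) -- invented
    'bouillon': (0.05, 10, 1),
    'lard':     (0.10, 900, 0),
    'beans':    (0.40, 300, 20),
    'steak':    (4.00, 600, 50),
}

def cheapest_diet(min_kcal=2000, min_protein=50, max_servings=10):
    """Exhaustively search serving counts; return (cost, servings) minimum."""
    names = list(FOODS)
    best = None
    for servings in product(range(max_servings + 1), repeat=len(names)):
        kcal = sum(n * FOODS[f][1] for n, f in zip(servings, names))
        protein = sum(n * FOODS[f][2] for n, f in zip(servings, names))
        if kcal < min_kcal or protein < min_protein:
            continue  # infeasible menu
        cost = sum(n * FOODS[f][0] for n, f in zip(servings, names))
        if best is None or cost < best[0]:
            best = (cost, dict(zip(names, servings)))
    return best

print(cheapest_diet())  # cheapest feasible menu: mostly lard and beans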

~~~
goostavos
Interesting read. Thanks a lot!

------
drostie
I should like to warn people of this sentence: _'If the history of resistance
to Darwinian thinking is a good measure, we can expect that long into the
future, long after every triumph of human thought has been matched or
surpassed by "mere machines," there will still be thinkers who insist that the
human mind works in mysterious ways that no science can comprehend.'_

You might think, by his preceding paragraph, that this applies to all of his
"trickle-down theorists". It emphatically does _not_ apply to Penrose or John
Searle -- two of the three he names explicitly. (I'm not even sure that it
applies to Descartes.)

So, let me just summarize those two, for people who aren't so familiar with
their writings. Penrose is a physicist who thinks it peculiar that we can
mathematically prove, for any computer, _here is a true fact which that
computer cannot prove._ He thinks this is peculiar because if _our
understanding_ were algorithmic, then that would imply that we could
eventually write down its axioms as a formal system, go through the Gödel
proof line by line, and prove, "here is a true fact which my understanding
cannot prove," thereby using our understanding to prove that fact, thus
constructing a contradiction. His conclusion is that understanding can't be
algorithmic. (Dennett of course disagrees.) Nonetheless, he believes in
science. He thinks that we need to understand existing science (quantum
mechanics and perhaps gravity) better to comprehend consciousness, but he
certainly seems to believe that science will eventually comprehend it.

Searle is a philosopher who finds it peculiar that functionalists (like
Dennett) still seem to quietly accept the Cartesian counting of substances.
That is, they seem to accept that consciousness is somehow a 'thing' which is
distinct from the stuff of the world, and therefore they must argue that such
'things' don't really exist. Searle wants to say, first and foremost, that we
really _are_ conscious when we wake up in the morning -- all that touchy-feely
crap that you experience until you go to bed at night isn't some sort of
_illusion_ , it really is a part of the world and does affect the world, etc.
Again, he views it as a mistake to think that it's somehow a 'second
substance' which cannot be reconciled with 'material substance' -- they're
both parts of the physical world and whether you treat them separately says
more about you than it does about the world.

He also wants to note that it (consciousness) is not a computation. This is
for a very simple reason: whether a box is performing a computation is
_observer-relative_ , given Turing's definition of a computer as a symbol-
shuffler: an observer who doesn't interpret the symbols 'properly' doesn't see
it as 'computing' anything. But whether _you_ are conscious is presumably
observer-independent; there is no perspective that I can switch to by which I
might somehow obviate your consciousness and thereby render you incapable of
feeling, for example, pain. Many of Searle's articles contain the attitude of:
I'm going to clear away the philosophical problems once and for all, and then
the scientists can do the hard part of figuring out what neuronal events
correlate with consciousness and all of that stuff, so that we can understand
consciousness. In that sense he certainly believes that science will
eventually understand consciousness.

These two might not even agree that "a thinking thing cannot be constructed
out of Turing's building blocks." If Dennett's point is "sorta-thinking" as a
model for thinking, then they might agree that a program can sorta-think.
Penrose is merely skeptical that sorta-thinking can climb as high as
understanding, while Searle is skeptical that sorta-thinking can rise to the
observer-independent nature that we consider a fundamental part of our
everyday experience.

~~~
endtime
>He thinks that we need to understand existing science (quantum mechanics and
perhaps gravity) better to comprehend consciousness

What, not phlogiston?

>That is, they seem to accept that consciousness is somehow a 'thing' which is
distinct from the stuff of the world, and therefore they must argue that such
'things' don't really exist.

Wait, what? Searle claims that functionalists are dualists? And that therefore
they must claim not to be? This makes no sense.

>This is for a very simple reason: whether a box is performing a computation
is observer-relative, given Turing's definition of a computer as a symbol-
shuffler: an observer who doesn't interpret the symbols 'properly' doesn't see
it as 'computing' anything. But whether you are conscious is presumably
observer-independent; there is no perspective that I can switch to by which I
might somehow obviate your consciousness...

If I write a program that computes the first N squares, and you don't
understand the symbols, the computation has nevertheless happened. The claim
that Turing computation is observer-relative is equivalent to claiming that if
I don't know how x86 works, then my computer won't boot. Of course it will
(and likewise, of course, whatever perspective you switch to has no effect on
my consciousness).

~~~
Strilanc
>>He thinks that we need to understand existing science (quantum mechanics and
perhaps gravity) better to comprehend consciousness

> What, not phlogiston?

To be fair, more research really is needed. We don't have an underlying theory
to explain the facts we know about consciousness. Yet.

We don't even know if a simulation using the _known_ laws of physics would
include the _effects_ of consciousness, let alone _be_ conscious.

~~~
benpbenp
What are the effects of consciousness, exactly? Is there a procedure to
determine whether a given human possesses consciousness as you yourself do?
Is there a procedure to determine whether an AI possesses consciousness as you
yourself do? Is there a way to disprove the solipsist? This is the nub of the
problem: there really isn't.

Consciousness does not present itself as just another thing in the world --
_at all_. It is the unique medium through which _you_ experience everything
_you_ have ever experienced. It is the fact that the universe exists from a
perspective. You assume that the universe also exists from _other_
perspectives, but there is no objective basis for this belief -- in the sense
that there are no phenomena solely attributable to true consciousness.

~~~
Strilanc
The fact that we are having this conversation is a measurable effect of
consciousness.

Consciousness interacts with the world and causes bodies to do different
things. This is a measurable effect. A p-zombie in an unconscious universe is
less likely to talk about "qualia" than a conscious being.

Hopefully we will discover some simpler effects to measure in the future, but
for now we're stuck with "people sure talk about it a lot for something that
doesn't exist...".

~~~
benpbenp
I agree that humans discuss consciousness and qualia because these things
really exist. So yes, in my philosophical opinion, this is an effect of
consciousness on the world. However, first, this phenomenon is not guaranteed
to occur with all conscious subjects (MOST humans do NOT discuss these
things), and second, the phenomenon can also occur with unconscious subjects
(You could be an unconscious program designed to win arguments on the
internet). The phenomenon is neither a necessary nor a sufficient indication
of Consciousness, and it can't be understood to have any scientific value for
any future objective understanding of Consciousness.

My position is even stronger though: I am quite happy imagining honest
p-zombies and AIs discussing consciousness, apparently as if they themselves
possessed it, but without any internal contradiction. We (you and I)
understand each other when we each say "consciousness" and "qualia" but it is
only because these concepts have a privileged place in our intellects-- we
experience them directly (what's more, they are the stuff of our very
experience of anything at all). Now, a p-zombie has no predisposition toward
understanding these concepts, because it has no subjective experience of its
own. How do I explain these concepts to the p-zombie, then, in a manner that
will force it to say, "Oh, right, no, I don't have consciousness"? I contend
it can't be done, that any attempt at a definition can always be misunderstood
to apply to objective phenomena of the human brain/mind/person apart from what
we know as true consciousness, and this is how a p-zombie or AI will always
understand it.

~~~
Strilanc
I think we agree about consciousness, but you have a stricter constraint for
"having scientific value".

------
zitterbewegung
Instead of reading this person's take on it, I would much rather read the
original paper. (1)

I don't see how Darwinism is even related at all. Penrose likes to say that
there is a fundamental limit, either via some quantum process or because
computers can't comprehend Gödel's theorem (2).

Aaronson, in his Quantum Computing Since Democritus course, says that if our
brain is a quantum computer, it isn't very good at taking advantage of it. (3)

Personally, I think we don't have a good ability to understand Gödel's theorem
ourselves (one of Penrose's arguments). Our pattern recognition is basically
learned through induction, and it just keeps refining its optimization.

1 <http://orium.homelinux.org/paper/turingai.pdf>

2 <http://en.wikipedia.org/wiki/The_Emperors_New_Mind>

3 <http://www.scottaaronson.com/democritus/lec10.5.html>

~~~
SeanLuke
In my opinion Penrose has been so wrong for so long about so much of
artificial intelligence and cognition that it's hard to take him seriously any
more.

~~~
Evbn
Classic case of a visionary genius crossing the blurry line from physics to
metaphysics.

~~~
jlgreco
I am reminded of the Nobel Disease:
<http://rationalwiki.org/wiki/Nobel_disease>

------
6ren
I was very impressed on learning that Turing proved that a Turing machine
could compute any computable number... but then I started to wonder how one
could prove such a thing. It seemed you'd need a definition of "computable"
and a definition of "Turing machine" and then show their equivalence. But how
could you define what was "computable", and show the definition was right?
That's the nub
of the problem and it seemed impossible to me. Eventually I read his paper,
and it turns out he thought so too:

      9. The extent of the computable numbers.
        No attempt has yet been made to show that the “computable” numbers include all
      numbers which would naturally be regarded as computable.  All arguments which can be
      given are bound to be, fundamentally, appeals to intuition, and for this reason
      rather unsatisfactory mathematically.

_On Computable Numbers, with an Application to the Entscheidungsproblem_
[http://www.comlab.ox.ac.uk/activities/ieg/e-library/sources/...](http://www.comlab.ox.ac.uk/activities/ieg/e-library/sources/tp2-ie.pdf)

~~~
sp332
Most real numbers are uncomputable: there are only countably many programs,
but uncountably many reals, so almost every real's infinite, non-repeating
digit string can never be produced by any program. More interesting
non-computable numbers come from the Busy Beaver problem:
<https://en.wikipedia.org/wiki/Busy_beaver>. There doesn't seem to be a way to
compute these numbers without basically trying out every possible answer.
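
To make "trying out every possible answer" concrete, here's a brute-force sketch (my own toy code) of Σ(2), the 2-state Busy Beaver "ones" count: enumerate every 2-state, 2-symbol Turing machine and simulate each under a fixed step budget. The step budget is the cheat: for larger n, no computable budget is big enough, which is exactly why the function is uncomputable.

```python
from itertools import product

def sigma_2(step_limit=100):
    """Max number of 1s any halting 2-state, 2-symbol TM leaves on the tape."""
    # A transition is (write, move, next_state); 'H' means halt.
    options = [(w, m, s) for w in (0, 1) for m in (-1, 1) for s in (0, 1, 'H')]
    best = 0
    for table in product(options, repeat=4):  # one entry per (state, symbol)
        rules = {(s, r): table[2 * s + r] for s in (0, 1) for r in (0, 1)}
        tape, pos, state = {}, 0, 0
        for _ in range(step_limit):
            write, move, nxt = rules[(state, tape.get(pos, 0))]
            tape[pos] = write
            pos += move
            if nxt == 'H':  # halting transitions still write and move
                best = max(best, sum(tape.values()))
                break
            state = nxt
        # machines still running at the budget are treated as non-halting
    return best

print(sigma_2())  # 4, the known value of Sigma(2)
```

The budget of 100 works here only because we already know all halting 2-state machines stop within 6 steps; for unknown n, deciding which machines halt is the halting problem itself.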

~~~
Xcelerate
But is the universe even modeled by real numbers or is this solely a concept
of the mathematical domain?

Bekenstein derived a bound (building on Hawking's black-hole results) that
puts an upper limit on the amount of information that can be contained within
a given region. As black holes are maximal-entropy objects, one can compute
the entropy for a black hole of a given size and equate it with Shannon
entropy to determine a number of bits. This suggests, to some degree, that the
fundamental particle is the bit.

(This also proves that a true Turing machine with unbounded memory is not
physically possible.)
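
The entropy-to-bits conversion sketched there, written out (standard Bekenstein–Hawking formulas, not taken from the comment):

```latex
S_{\mathrm{BH}} \;=\; \frac{k_B\, c^3 A}{4 G \hbar},
\qquad
N_{\mathrm{bits}} \;=\; \frac{S_{\mathrm{BH}}}{k_B \ln 2}
\;=\; \frac{A}{4\,\ell_P^2 \ln 2},
\qquad
\ell_P = \sqrt{\hbar G / c^3}
```

Roughly one bit per few Planck areas of horizon; that the count scales with surface area A rather than volume is the surprising part, and is what motivates the holographic principle.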

~~~
sp332
This whole thing is in the "mathematical domain". The point is that you can
define a number in such a way that it clearly exists, but there's no good way
to calculate it.

------
domador
The analogy drawn between evolution and artificial intelligence is
interesting, yet ultimately deficient in at least one key aspect. Yes, the
steps performed by a computer are unintelligent, just like mutation and
natural selection within evolution. However, unlike the process of life
according to evolution, computers are not purposeless or directionless.
Computers were designed and imbued with a purpose (computation) by intelligent
beings, in contrast to evolution’s “blind watchmaker”. Yes, unintelligent
computers can perform intelligent processes (for a particular definition of
intelligence). Yet it remains up to intelligent beings to judge the final
output produced by computers: whether such output makes sense, is useful, or
is true. Even if eventually a computer designed by another computer (or a
lineage of computers) manages to pass the most brutal of Turing tests, you
can’t get away from the fact that such a computer ultimately owes its
direction, purpose, and intelligence to the intelligent beings who put
together its oldest ancestor.

For an apt analogy between AI and evolution, a computer would need to exist
and compute without any involvement from intelligent beings at any point in
its existence. That computer would need to develop its direction and processes
in absolute, unintelligent independence. We couldn’t develop such a computer
ourselves; we’d need to discover it somewhere in the natural world (perhaps
lying within a mineral deposit). However, at this point we’d be drawing an
analogy between evolution and another form of itself. Circular analogies
aren’t very useful.

The author writes, “in order to be a perfect and beautiful computing machine,
it is not requisite to know what arithmetic is.” This is true, but knowledge
of arithmetic (or a higher form of reasoning) is required in order to produce
such a machine in the first place. At least that’s what we can tell based on
the only computers available to us (all of which owe their existence to
knowledgeable beings).

------
hcarvalhoalves
I believe the comparison the author makes between Turing and Darwin is valid
and applies, in general, to any system, really.

That is, it's very hard to describe a system, top-down, as the state of the
whole thing at a particular point in time, but it's more feasible to approach
a description as the rate of change of all the smaller systems contained
within.

In other words: it's hard to describe a computer, but it's easier to describe
it as a series of computations (Turing machine); it's hard to describe "life",
but it's easier to describe it as a series of processes over time (evolution).

It's a way of thinking.

------
evincarofautumn
With a powerful enough computing system, we could just run an artificial world
and hope intelligence evolves within it.
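
A minimal caricature of that idea (my own sketch, nothing remotely like a real artificial world): a population of bit-strings, blind point mutation, and survival of the fittest under a fitness function that nothing in the population "knows" about. No step is intelligent, yet fit structure accumulates generation by generation.

```python
import random

def evolve(genome_len=20, pop_size=30, generations=200, seed=1):
    """Evolve bit-strings by truncation selection plus one-bit mutation."""
    rng = random.Random(seed)
    fitness = sum  # blind objective: just count the 1 bits
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # point mutation
            children.append(child)
        pop = survivors + children
    return max(fitness(g) for g in pop)

print(evolve())  # near the maximum of 20, with no designer in the loop
```

Of course, scaling this from "count the 1s" to anything resembling intelligence is exactly the hard part the comment hand-waves over.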

~~~
PurplePanda
Given that you would also be simulating a lot of unintelligent behaviour, how
would you discover whether or not the simulation has been successful? I mean
how would you find which parts of your system are the intelligent parts?

~~~
jlgreco
Perhaps you could detect when they start to perform science. Put them on a
rock, then a few hundred thousand kilometers away place another rock, this
time sterile, and separate them with a vacuum. If living goop gets to the
sterile rock, chances are you would be justified in calling it intelligent.

(Yes, I did just recently watch 2001. Sue me ;p)

------
Xcelerate
As someone who has a keen interest in reading about evolution and artificial
intelligence, I'm a bit disappointed they had to bring an anti-God slant to
the article.

Other than that, I certainly believe it's possible to generate something
"intelligent" (for lack of a better word). I think this will require an
improvement in both processing ability -- and more importantly -- an
innovative new way to create a mutation algorithm. If we just take bits and
let them mutate over time, we may eventually get something like AI but I think
it will take more time than we have. We'll need to use our own intelligence to
accelerate this process.

~~~
noonespecial
I don't think it's "anti-God" per se. It's just that as science advances
there's more and more that we can say about the universe that doesn't
_require_ God (or philosophy) to talk about.

Unfortunately the early western Judeo-Christian line of thinking biased people
heavily towards a "God of the gaps" style of thinking. In this line, God is
first postulated as an explanation for unexplainable observations and then
later falsified. This makes science seem "anti-God" just by its nature, since
it always seems to be trying to un-gap the God parts. IMHO this philosophical
rabbit trail was one of Christianity's biggest mistakes.

~~~
Xcelerate
Yeah, I can agree with that. In fact, I would be surprised that an omnipotent
being would be "hiding in a crevice, just waiting to be found" by humans. It
seems too much like a story. I prefer the idea of absolute physical laws that
were well- _designed_ , and from these laws everything else came about.

------
grassclip
Very interesting. Neat look into how humans are just made up of layers that
perform a certain task. The layers themselves don't know the overall structure
or results of their actions, but their combination creates the human. The
parallel between humans and computers in this regard has been widely noted,
but I hadn't made the connection between the evolution of humans that led to a
more complex and thinking organism and possibly using this method to foster
artificial intelligence.

------
samhan
Could someone please explain what this article is trying to convey? I
understand that the main thrust is that intelligence or comprehension can
emerge / evolve as an end result of simpler interacting processes that are not
necessarily intelligent. This is analogous to how evolution builds complex
systems even though the "algorithm" for evolution isn't so complex. So are
they implying that strong AI is definitely possible? Or are they saying
something beyond that?

------
dsirijus
Isn't this basically a stance that Wolfram argues with his cellular automata
as basis?

~~~
Xcelerate
It's interesting you mention that. Gerard 't Hooft came up with a way (a
loophole) by which the universe could still be entirely deterministic, using
a cellular automaton model. Most physicists disagree with this view, but what
makes it interesting is that 't Hooft is a Nobel laureate.

Here's his paper: <http://iopscience.iop.org/1742-6596/67/1/012015/>

------
briandear
Darwin's theory was the Theory of Natural Selection, not the theory of
evolution. I wish people would get it right.

~~~
raldi
What's the difference?

------
nacker
"In order to be a perfect and beautiful computing machine, it is not requisite
to know what arithmetic is."

I was just thinking yesterday that a similar thing might be true for human
beings. We constantly strive for self-knowledge, what we really want, what we
really need, but what if a smoothly functioning "self" is dependent on NOT
knowing exactly how our minds work?

I'm just going to give up trying to understand myself as a complete waste of
time. So far, it feels liberating. YMMV.

~~~
nacker
“Voici mon secret. Il est très simple: on ne voit bien qu’avec le cœur.
L’essentiel est invisible pour les yeux.” ("Here is my secret. It is very
simple: one sees clearly only with the heart. What is essential is invisible
to the eyes.")

[http://www.economist.com/blogs/prospero/2012/06/quick-
study-...](http://www.economist.com/blogs/prospero/2012/06/quick-study-
satoshi-kanazawa-intelligence?fsrc=scn/tw/te/bl/thedisadvantageofintelligence)

"Le cœur a ses raisons que la raison ne connaît pas" ("The heart has its
reasons, of which reason knows nothing.")

------
username3
_We may now construct a machine to do the work of this computer._

Creation.

