
On Artificial Intelligence - ucha
http://aeon.co/magazine/technology/david-deutsch-artificial-intelligence/
======
dragonwriter
The argument that artificial intelligence requires understanding how
intelligence works is an argument that natural intelligence requires
Intelligent Design. (It's also an argument that fortuitous discoveries--such as
of pharmaceuticals with utility in treating conditions they were not designed
for and whose mechanisms we do not understand--cannot occur.)

Obviously, understanding intelligence better would promote more effective
_directed_ research toward artificial intelligence. But if we can _identify_
it (which the Turing Test is about), then it is quite possible that we can
develop it -- and know that we have -- without understanding it. (And it may
only be through developing it that we end up understanding how it works.)

~~~
dsacco
I upvoted you for the insightful reasoning but I disagree with your stated
premise that

 _> The argument that artificial intelligence requires understanding how
intelligence works is an argument that natural intelligence requires
Intelligent Design._

I think that statement makes sense when phrased that way, so it is an
attractive idea. However, I don't think it's true. From an evolutionary
standpoint, biological intelligence developed naturally because biological
components are natural. Furthermore, machines do not develop when left in
isolation, while biological organisms do. If you leave a large population of
simple machines running in an environment, it is overwhelmingly unlikely to
result in a machine intelligence millions of years later. Machines were
developed by humans and do not develop in the same way; comparing the two as
in your Intelligent Design argument doesn't make much sense.

I do not think artificial intelligence can arise naturally because its
components are not natural. This could also foster a discussion on two other
interesting questions:

1. Does intelligence require organic components that operate in a
deterministic way (i.e. the brain) to elevate it beyond a mere "machine"?

2. If intelligence requires biology, where do you draw the line between
creating an intelligence through natural human reproduction and creating an
artificial intelligence another way?

Personally, I believe artificial intelligence does not require biological
components, and I believe that under certain circumstances, it could develop
unintentionally from a relatively advanced computer, but that is not the same
as naturally.

~~~
dragonwriter
> Furthermore, machines do not develop when left in isolation, while
> biological organisms do.

Biological organisms _are_ machines.

Entities which meet the necessary requirements for Darwinian evolution (which
are, approximately, that they can pass on their traits from generation to
generation, have a source for variability, and have selective pressure in the
environment) change over successive generations, and all kinds of machines
change individually over time in response to interactions with the environment
(in fact, pretty much _all physical objects do_.) Biological organisms
obviously meet the requirements for Darwinian evolution (since that's where it
was first observed and described), but there's no magic "biological sauce"
required.

> I do not think artificial intelligence can arise naturally because its
> components are not natural.

The whole natural/artificial divide is unsound, since humans, and hence all of
their products, are part of and products of nature. Nothing not-natural
exists.

> Personally, I believe artificial intelligence does not require biological
> components, and I believe that under certain circumstances, it could develop
> unintentionally from a relatively advanced computer, but that is not the
> same as naturally.

The whole thing about artificial intelligence arising naturally is a strawman
you've constructed. You've just agreed with and illustrated my argument --
which is that artificial intelligence does not require understanding
intelligence.

~~~
dsacco
_> Biological organisms are machines._

No, I think we're arguing two different things here. I don't mean machines in
the sense of a deterministic mechanism, I mean machines in the sense of a non-
biological computer.

Machines do not have generations. Machines are not alive. I can simplify this:
we do not yet have computers which are alive, thus they fundamentally do not
operate the same way humans do in terms of evolution. This circles back to
what I said about leaving machines alone to develop in an evolutionary sense -
they won't, they'll die.

Machines do not yet improve themselves after a certain "age" of maturation.

The natural/artificial divide is _not_ unsound, because artificial means
something which is created by a human, and natural means something which is
not. There are definitions of these terms and philosophical schools of thought
that make this divide unsound, but colloquially, I don't mean those in this
context.

Here is the basic point I'm trying to make - human beings arose naturally with
no apparent intelligence to guide them into existence. Machines did not do so.
You can argue they did because "everything is natural", but that's not my
point here. They are fundamentally different from human beings and only exist
because human beings existed first.

You can't compare the relationship between Intelligent Design and the human
species with the relationship between humans and machines, because the two
are opposites. Humans apparently have no guiding force that opted to create
them or guide their existence, whereas machines do: us.

That is what I mean by natural. I think an intelligence can arise naturally,
but only if it follows the natural conduits that would form conventional
intelligence, which is via biological organisms.

In order to replicate what happened through evolution using a _completely_
different "container" if you will, you'd need to be able to understand
intelligence completely, or at least enough to implement it.

Ultimately, intelligence has never arisen from non-organic components (at
least not on Earth). To make it do so where it would not ordinarily happen is
what I
mean by _artificial_ versus _natural._ And to do that is to essentially
reverse-engineer intelligence itself, which would require understanding it.

I apologize if I'm still not being clear, but does this make my point any
better?

~~~
dragonwriter
You keep positing an invalid distinction between biological and other
mechanisms. There is nothing limiting generations to biological mechanisms.
Consider von
Neumann's universal constructors or, for purely software mechanisms, the
agents within Avida.

We do, in fact, now have machines -- in the software sense -- that have all
features necessary for Darwinian evolution to operate. We don't yet have that
for hardware devices, but we're fairly close, and we certainly don't need to
understand intelligence to build such self-replicating hardware/software
systems (though the universality of computation suggests we don't need
hardware/software systems, since anything they can exhibit, pure software
systems can as well.)
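The claim that software can meet the requirements for Darwinian evolution can be sketched in a few lines. This is a toy illustration only (not Avida or a universal constructor); the bit-string genome, all-ones fitness target, and parameters are invented for the example:

```python
import random

TARGET_LEN = 20

def fitness(genome):
    # Target is the all-ones string, so fitness is the number of 1 bits.
    return sum(genome)

def mutate(genome, rate=0.02):
    # Variation: each bit flips independently with small probability.
    return [b ^ 1 if random.random() < rate else b for b in genome]

def evolve(pop_size=50, generations=200, seed=0):
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        best = max(population, key=fitness)
        # Selective pressure: parents are chosen with probability
        # proportional to fitness.
        parents = random.choices(population,
                                 weights=[fitness(g) + 1 for g in population],
                                 k=pop_size - 1)
        # Heredity + variation: children are mutated copies of their
        # parents; one unmutated copy of the current best is kept.
        population = [best] + [mutate(p) for p in parents]
    return max(fitness(g) for g in population)

print(evolve())  # best fitness after evolution, out of 20
```

Heredity, variation, and selection are all present, and no understanding of the traits being evolved is needed for improvement to occur.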

The problem isn't that you haven't explained yourself well, it's that your
argument rests on a fundamental distinction between natural biological
organisms and all other machines which does not exist.

~~~
p1esk
I think you have to address the parent's argument that machines are not
"alive", while biological organisms are.

Of course, first we need to define "alive", and then we should ask: can we
build something that will be "alive"?

Do you consider the entities in Avida simulation to be alive? How about
biological vs computer viruses? Is there a fundamental difference between
them?

If we simulate a biological organism on an atomic level, together with its
immediate environment, so that it behaves exactly like its real-world
counterpart would, do we call it "alive"?

------
breuleux
I don't feel like the author is familiar with modern approaches to AI. For
instance, he mentions "creativity" as a stumbling block for AGI, but there is
a whole class of existing algorithms that exhibit prototypical creativity,
namely generative models.

Essentially, the idea is that "creativity" is the act of sampling from a
distribution over the kind of thing you are trying to create. Learning
algorithms like Boltzmann Machines will learn a distribution over the inputs
they see. One thing you can do with such a distribution is check the
probability of a given input under it, which can be useful for classification.
Another thing you might want to do is generate _representative samples_ , i.e.
generate an example E with probability X iff P(E) = X under the model. The
latter is what I would call "creativity".

Under this definition, creativity depends on both the learned distribution
(which should only assign high probability to meaningful data) and of course
on the sampling algorithm. As it turns out, it is very hard to write good
sampling algorithms for non-trivial distributions (naive MCMC will often get
stuck). So creativity _is_ hard, but so are a lot of other tasks, so I don't
think it's fair to single it out.

~~~
chriskanan
I don't think generative models are sufficient to model the kind of creativity
that the author is thinking about. See this quote from the article:

 _The prevailing misconception is that by assuming that ‘the future will be
like the past’, it can ‘derive’ (or ‘extrapolate’ or ‘generalise’) theories
from repeated experiences by an alleged process called ‘induction’. But that
is impossible._

He is looking for solutions outside of the distribution that has been
previously observed.

~~~
breuleux
What the author is looking for _is_ a kind of induction. If I'm seeking a new
theory of gravity, I'm not going to start looking at the colors of hats. Why
not? Because I very strongly suspect it will lead absolutely nowhere. But if I
think some ideas have potential and some others don't, isn't that a
distribution of sorts?

Theories that have relevance or potential obey a certain distribution, and it
is this distribution that you are trying to sample from. Sure, theories may
not be _directly_ derived from the extrapolation of sense data, but they are
nonetheless derived from the extrapolation of theories in general. So what
you're looking for is not a fundamentally new paradigm, it's more like an
additional level of indirection. But there's no point in experimenting with
multiple induction levels if we can't even make a single one work well enough.

~~~
pron
I agree that higher-order induction might very well suffice. But just to
clarify the author's argument on the point, our rational justification for
induction -- as you have just done -- cannot be inductive, because there
cannot be an inductive justification for induction, and therefore induction
cannot produce _knowledge_ (which must be justified).

~~~
breuleux
I don't think induction requires justification at all. I mean, if you do it
and it works, great! If you do it and it doesn't work, well, what else were
you supposed to do anyway? Ultimately, I use induction because I have no
better idea, not because I think it necessarily works.

Also, generative or inductive mechanisms have a wider scope of application
than prediction. They can be used to inspect your own belief systems and
pinpoint inconsistencies: the easiest way to know if your model of the world
is inconsistent is to generate ideas and examples that fit the model but
trigger contradictions.

~~~
pron
> if you do it and it works, great! If you do it and it doesn't work, well,
> what else were you supposed to do anyway?

But, as the author explains, epistemology doesn't work this way, and is
certainly not inductive (maybe higher-order inductive -- whatever that means).
We don't treat physics as simply "something that works", but as knowledge
based on assumptions (codified in symmetry laws) which are not inductive by
any means. The laws of symmetry (assumptions, really) are a _justification_
for induction, but can't be a result of induction alone. In fact, all of
mathematics is a set of justifications for inductions that humans have
developed.

~~~
throwmeunder
Sorry, but forming hypotheses based on observations, which is what assumptions
mean to me, is induction. They are fallible, so they need to be proven in
order to be accepted. Usually that's done by proof by contradiction.

But you don't need to prove them to use them. Many people go around believing
unproven theories. In the sciences the preference is for verified theories
(and theorems in math), and we prove them by deduction.

But it seems that you know that. I don't understand why you don't like
induction in a general algorithm. We don't have to restrict ourselves to a
single type of reasoning. Induction, deduction, abduction are all valid and
used by humans for generating new knowledge.

I might be misunderstanding something in which case please correct me.

~~~
pron
I think you've misunderstood :)

The article says that current research focuses on achieving intelligence by
means of induction alone, but induction cannot explain all of intelligence,
because we reason in ways that contradict induction (although maybe they're a
result of higher-order induction).

------
ForHackernews
This author makes a good case that current approaches toward producing AGI are
misguided:

> The Skynet misconception likewise informs the hope that AGI is merely an
> emergent property of complexity, or that increased computer power will bring
> it forth (as if someone had already written an AGI program but it takes a
> year to utter each sentence). It is behind the notion that the unique
> abilities of the brain are due to its ‘massive parallelism’ or to its
> neuronal architecture, two ideas that violate computational universality.

But I don't think he's done a very good job of supporting his assertion that
thinking (in the AGI sense) _is_ a computational process. The closest he comes
is:

> But that’s not a metaphor: the universality of computation follows from the
> known laws of physics.

That's it? Because physics?

------
ajuc
Completely missing the point of the analogy, I know, but skyscrapers would fly
if they were tall enough, right?

With the center of gravity far enough out, they should become satellites, if
they were made from something that can withstand the forces involved?

~~~
fchollet
Your "skyscraper" would need to have its top in geostationary orbit, or else
it would wrap around the earth, as its top would be in LEO or HEO and thus
moving at a different speed than its base. I.e. your skyscraper (or cable)
would come crashing down.

So that's a space elevator. But can a space elevator be said to be "flying"?
It stays above a single spot the whole time.

~~~
bjornsing
BS. ;)

First of all before a (hypothetical) "skyscraper" could fly it would have to
have its top beyond geostationary orbit. How so? Because a skyscraper is
geostationary and if its center of mass is below geostationary orbit then it's
simply not orbiting. It doesn't have enough kinetic energy.

Secondly, there's no law that an object revolving around earth must do so at
"orbit" speed/altitude. You and I are perfect examples of that: we're going
far too slow. The only consequence of that is that earth's gravitational pull must
constantly be counteracted by a supporting force. If an object were to revolve
around earth at a speed faster than "orbit" that's also fine, as long as
there's a force keeping it down.

So, no, this "skyscraper" could have its top well below geostationary orbit as
long as it rests firmly on mother earth. Once its center of mass is beyond
geostationary orbit though it would need to be held down.

------
soup10
tl;dr Human intelligence is special. Computer accomplishments like being good
at chess are not real intelligence. In fact, anything computers ever do is not
real intelligence, because they aren't like us and possibly never will be,
because there is some divine truth and wisdom that only humans possess.

How many times do we have to hear these arguments to realize they are just hot
air? Either computers are capable of intelligence, or they are not. The answer
to that question depends entirely on how you define intelligence. If you
define it as the set of things humans are capable of and computers are not,
then the answer will always be no. But just like humans, computers learn and
are taught new things every day, and as time goes on the set of things humans
are uniquely capable of grows smaller and smaller.

~~~
java-man
Jeff Hawkins defines intelligence as the ability to predict. I think it's a
good definition: it allows us to _measure_ intelligence.
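One way to make the "measurable" part concrete (a toy sketch, not Hawkins's actual framework; the sequence and both predictors are invented for illustration) is to score competing predictors by how often they guess the next symbol of a sequence:

```python
from collections import defaultdict

def frequency_predictor(history):
    # Predicts whichever symbol has been most common so far.
    if not history:
        return None
    return max(set(history), key=history.count)

def markov_predictor(history):
    # Predicts the symbol that most often followed the current one.
    if len(history) < 2:
        return None
    follows = defaultdict(lambda: defaultdict(int))
    for a, b in zip(history, history[1:]):
        follows[a][b] += 1
    nxt = follows[history[-1]]
    if not nxt:
        return None
    return max(nxt, key=nxt.get)

def score(predictor, sequence):
    # Fraction of next-symbol predictions that turn out correct.
    hits = sum(predictor(sequence[:i]) == sequence[i]
               for i in range(1, len(sequence)))
    return hits / (len(sequence) - 1)

seq = list('abcabcabcabcabcabc')
print(score(frequency_predictor, seq), score(markov_predictor, seq))
```

On this repeating sequence the Markov predictor scores higher, which under a prediction-based definition would make it the "more intelligent" of the two: the definition turns intelligence into a measurable quantity.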

~~~
thom
I've not read much about this definition, but it seems that a problem with
focusing on the ability to predict is that to get useful work done, you have
to reduce everything to a decision problem. You have an ideal fitness
function, but how are you generating ideas, plans or theories to evaluate?

I personally prefer (although it's certainly less measurable) Ben Goertzel's
definition - the ability to achieve complex goals in complex environments.
It's still wishy-washy enough for people to write off any given achievement of
AI though, I guess.

------
dcre
> Unfortunately, what we know about epistemology is contained largely in the
> work of the philosopher Karl Popper and is almost universally underrated and
> misunderstood (even — or perhaps especially — by philosophers). For example,
> it is still taken for granted by almost every authority that knowledge
> consists of justified, true beliefs and that, therefore, an AGI’s thinking
> must include some process during which it justifies some of its theories as
> true, or probable, while rejecting others as false or improbable.

This is where I gave up. Deutsch dismisses thousands of years of thought in a
sentence. Not to mention that "justified true belief" is a phrase you find
much more often in a textbook or an encyclopedia article than in a real work
of philosophy.

------
robohamburger
The idea that we ought to have a better philosophical underpinning for AGI
makes a lot of sense. Unfortunately the author blows past this and starts
making a lot of tortured claims that don't entirely make sense.

The example of years starting with '20' seemed quite odd, given that the
easiest way to understand numbers (at least for me) is inductively. Of course,
if you lop off a bunch of information, such as the digits after the '20' or
'19', it sounds like an impossible problem.

------
bitdiddle
Brian Eno makes a good argument[1] that it's already here.

[1] [http://edge.org/responses/q2015](http://edge.org/responses/q2015)

