
David Deutsch On Artificial Intelligence - thewarrior
http://aeon.co/magazine/being-human/david-deutsch-artificial-intelligence/
======
nzp
I can just say that Deutsch is another victim of what being good at
theoretical physics tends to do to one's mind. The amount of intellectual
hubris and arrogance we can develop is staggering. Sheldon Cooper is not
really a parody, it's what many theoretical physicists actually think (but are
too socially adapted to say out loud), as in: "Penny - I'm a physicist. I have
a working knowledge of the entire universe and everything it contains."
Deutsch may think he knows things about "the mind" but he shows in this
article that, like many who like to speculate wildly about the mind, he's
basing his claim mostly on armchairing about how he _feels_ about certain
phenomena. Things he says about behaviourism are indicative that he doesn't
_really_ know what he's talking about. Or he does but is extremely bad at
communicating his ideas. It would just take too long to comment on each and
every empty claim this article makes. It's mostly a rehash of the same old boring
arguments about how special we are, this time because of our "creativity".
Yawn, there is nothing special about "creativity", it's an active area of
research, has been for decades, and it's actually very boring. He does make a
few good points, like the one that quantum phenomena do not give rise to
mental processes, but that is trivial in the sense that it's obvious to anyone with
some knowledge of quantum mechanics on one hand, and what neurons are and how
they work on the other.

~~~
ArbitraryLimits
Speaking as someone with a physics degree, I like to quote Peter Thiel (for
once):

> This is why physics Ph.D’s are notoriously difficult to work with; because
> they know fundamental things, they think they know all things.

\- [http://blakemasters.com/post/22866240816/peter-thiels-cs183-startup-class-11-notes-essay](http://blakemasters.com/post/22866240816/peter-thiels-cs183-startup-class-11-notes-essay)

~~~
mathattack
Perhaps also why physicists can be dangerous in Finance. They have all the
math skills, but they can also have a little too much faith in those skills.

------
jorleif
"only a tiny component of thinking is about prediction at all"

Here I really struggle to follow his argument. It seems he is saying that
thinking is not prediction, but how does he know that? As I see it, we don't
know what thinking is, so is he then saying that thinking does not feel like
prediction, and therefore it cannot be prediction? It is well known that we
are only aware of our very highest level of thought. What goes on below the
surface may very well be very much about prediction without us knowing about
it.
When we think about "the world" we must have some internal representation of
the configuration of objects that we think about. In my opinion this is
implicitly a kind of prediction, because we create (at least mostly)
configurations that are plausible, if not true (counterfactual, but possible).

It seems to me that creativity is mostly about finding good approximate
solutions to hard (e.g. NP-complete) problems. How we do it is so far unknown,
but it seems that it is not in principle so very special. The explanation for
the calendar he speaks about is actually not very complicated inference at
all, and is probably within reach of artificial systems. All a system has to
do is generalize '19' as belonging to a number category, and then apply the
predictive properties of how numbers work. Not all prediction is naively predicting
that the world is as it was before. If change is predictable in the past, then
change can also be predicted to the future. Prediction is difficult, but that
does not make it less important.
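
To make that concrete, a predictor that models past _change_ rather than past
values gets the '19' to '20' rollover for free. A minimal toy sketch in Python
(mine, not anything from the article):

    # Observed calendar years: the first two digits were always "19".
    history = [1996, 1997, 1998, 1999]

    # Model the change between observations, not the raw observations.
    diffs = [b - a for a, b in zip(history, history[1:])]  # [1, 1, 1]

    # Extrapolating the learned change predicts the never-observed "20".
    prediction = history[-1] + diffs[-1]
    print(prediction)  # 2000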

~~~
senorgomez
Is there a difference between reasoning and predicting? It seems you don't
think so and the author does.

~~~
jorleif
I think there is a difference, but my view is that it is impossible to reason
without sufficiently accurate prediction, and that the concepts we use for
reasoning are largely formed based on how predictive they are. Consider
playing a game where two players throw dice, and the one with the larger
number wins. There is no predictability between turns, so there is no point in
reasoning far into the future, unlike in chess, where the positions change
slowly, one piece per turn and in predictable patterns. In chess you still
don't know what the other player will do, but you can reason about it, because
there is a level of predictability.

I would define reasoning as answering a question or optimization problem, e.g.
What is my best move? What is a good move? This can be formalized as
calculating some kind of score for a possible action, whereas prediction is
just about assigning a probability value to an event.
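
In code the distinction might look roughly like this (a hypothetical sketch
with made-up names, not anything from the article): prediction maps an event
to a probability, while reasoning uses those probabilities to score and
choose among actions.

    # Prediction: assign a probability to an event.
    def predict(event, model):
        return model.get(event, 0.0)

    # Reasoning: score the possible actions using those predictions and
    # pick the best one. This only works if predict() is fairly accurate.
    def reason(actions, outcomes, utility, model):
        def expected_score(action):
            return sum(predict((action, o), model) * utility[o]
                       for o in outcomes)
        return max(actions, key=expected_score)

    # Toy example: action "a" has the better expected score.
    model = {("a", "win"): 0.7, ("a", "lose"): 0.3,
             ("b", "win"): 0.4, ("b", "lose"): 0.6}
    print(reason(["a", "b"], ["win", "lose"], {"win": 1, "lose": 0}, model))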

~~~
Double_Cast
Bad example. I can predict the dice game's problem space. E.g. for a single
normal die (1-6), I can predict that I'll never observe a 7 [P(7) = 0%]. I can
also predict the probability distribution. E.g. for two dice thrown together,
the probability of throwing a sum of 7 is 1/6, whereas the probability of
throwing snake eyes is only 1/36. Yes, the game's output is random. But what
you're talking about is control, not prediction. If random meant "impossible
to make predictions about", then statistics wouldn't exist.
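
A quick brute-force check of those figures in Python:

    from itertools import product

    # All 36 equally likely outcomes of throwing two fair dice.
    outcomes = list(product(range(1, 7), repeat=2))

    p_seven = sum(a + b == 7 for a, b in outcomes) / len(outcomes)
    p_snake_eyes = sum((a, b) == (1, 1) for a, b in outcomes) / len(outcomes)

    print(p_seven)       # 0.1666... = 1/6
    print(p_snake_eyes)  # 0.0277... = 1/36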

Regarding winning, there's no way to improve one's chances (besides
cheating?). But merely knowing that the game is predicated solely on chance
can be useful for other applications. E.g. realizing that the lottery is a
scam.

Personally, I'm in the camp that says reasoning is an instrumental value
towards prediction.

------
murbard2
How on earth is Bayesianism a form of "behaviorism"? And what is it with the
cult of Popper? Popperism isn't "underrated", it's considered the gold
standard of epistemology despite being a vague philosophical theory full of
holes. Contrast that with Solomonoff induction, which has a rigorous grounding
and which offers hypothesis testing and rejection as a trivial subcase.
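
For reference, the universal prior behind Solomonoff induction, in standard
textbook notation (sketched here for context, not taken from the essay):

    % Universal prior: sum over all programs p that make a universal
    % prefix machine U output a string beginning with x, weighted by length.
    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}

    % Popperian falsification drops out as a trivial subcase of the Bayesian
    % update: a hypothesis h that assigns zero probability to the observed
    % evidence e gets posterior exactly zero.
    P(h \mid e) = \frac{P(e \mid h)\, P(h)}{\sum_{h'} P(e \mid h')\, P(h')} = 0
        \quad \text{whenever } P(e \mid h) = 0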

~~~
enord
How is Bayesianism not a form of behaviorism? The essay devotes a paragraph
to explaining this point, although Deutsch assumes the reader is familiar
with behaviorism and the probabilistic methods used in AI.

~~~
jorleif
Had he said that Bayesian decision theory with optimization of utility
functions etc. was behaviorism, then I would agree, but Bayesian formulations
can be used for many other things as well. At least one can propose very
complex internal models, which behaviorists were criticized for not
considering.

~~~
murbard2
Agreed. Besides, behaviorism was abandoned in psychology because it was crazy,
not because there's something wrong with reinforcement learning.

It was treating the human mind as a black box, trying to pretend we have no
insight into human thoughts. It was trying to avoid anthropomorphizing
_people_, as Morgenbesser put it. In fact, quite the opposite: I'd attribute
this blatant idiocy to Popperism. After all, introspection isn't testable or
falsifiable in a Popperian sense. There's no room for the concept of models in
Popperianism. There are just big black box ideas that you test and are
falsified or not.

------
tim333
There's a certain amount of twaddle in Deutsch's arguments in general but it's
quite hard work wading through the argument to see what's wrong. An iffy
statement of his here is "the field of ‘artificial general intelligence’ or
AGI — has made no progress whatever during the entire six decades of its
existence." Which is tricky to totally dispute because of his vagueness as to
what AGI is. But let's assume he means thinking like a human. Well fair enough
no computer thinks like a human yet but a lot of progress has been made in
that direction such as Watson in verbal processing and the Google cars in
geospatial awareness. So the argument is kind of misleading. I find his
arguments in physics similar - basically not much good but written in a form
that is hard work to dispute.

------
rdtsc
> So, why is it still conventional wisdom that we get our theories by
> induction?

I thought the other conventional wisdom is that we get our theories by
building mental models. I throw a ball, and I can either remember how it flew
in the past and use that to predict, or I can build a mental model from first
principles to find out what will happen: maybe run a short 5-second idealized
simulation in my head and predict what might happen. Then I might realize
that there was wind and that my original model was too idealized.

Another example: someone on HN (or an article posted to HN) talked about
whether people could predict what happens to a pen dropped on the moon, and
why astronauts stood firmly planted on the moon's surface. It was interesting
how many people got that wrong. The difference was that some seemed to have
been reasoning by analogy or had encoded faulty facts (gravity happens only
on earth), some went by past experiences (the astronauts have heavy boots),
while those with a correct understanding of first principles could, so to
speak, simulate and understand what would happen -- the pen would drop to the
moon's surface. The moon has some gravity. So do other planets. And so on.

Now most of these are still based on some principles that we hold true and
are "like the past". Gravity is like yesterday's, the fact that wind affects
objects in flight is like yesterday's, the speed of light is like
yesterday's. There needs to be a wide set of known and predictable rules
about the world, and then deriving or simulating the future can be done.

There can be something in between: maybe if I built a mental model just
recently, it gets memoized, and instead of re-running it I just remember that
it is like the "past one" but with a small tweak.
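
That in-between mechanism is essentially what programmers call memoization. A
loose analogy in Python (the "physics" here is a made-up stand-in):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def simulate(velocity, wind):
        """Stand-in for an expensive idealized mental simulation."""
        return velocity * 5 - wind  # fake physics, just something to cache

    simulate(10, 2)  # first call: the "mental model" actually runs
    simulate(10, 2)  # same inputs: the cached result is recalled, not re-run
    simulate(10, 3)  # a small tweak: only the changed case is recomputed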

Another mechanism is the "seeker of inconsistencies". Something that
understands when facts taken together don't make sense. So if I first thought
the pencil would float away on the moon, and then remembered that astronauts
are planted firmly on its surface, this would, so to speak, raise an
exception and say "aha, something is not consistent, pay more attention to
this one".

But that is how my brain works. Other brains might work differently. I can
inspect my own thought process to some degree, but I cannot get into the head
of someone else and "see" how they think about things. Do they visualize, or
do they see words and formulas instead and plug values in? Are some
completely unaware of how they think, lacking much self-reflection?

I am papering over and being too simplistic here, and I don't necessarily
disagree with the article; that was just my reaction when I read that
paragraph.

~~~
ansible
_I thought the other conventional wisdom is that we get our theories by
building mental models. I throw a ball, and I can either remember how it flew
in the past and use that to predict, or I can build a mental model from first
principles to find out what will happen: maybe run a short 5-second idealized
simulation in my head and predict what might happen. Then I might realize
that there was wind and that my original model was too idealized._

My current understanding of how sports skills are developed is closer to what
you first said than to the bit about 'running a simulation'.

The pro ball players have practised enough that they've seen the ball coming
in at this angle, at that angle, and with this wind condition and that wind
condition. And so they 'remember' where the ball is going to land, and they
remember how to move to get there.

Our brains are not, as far as I know, running any kind of recognizable
simulation like we would with a computer.

~~~
rdtsc
Interesting. I do visualize in my head how kinetic energy depletes as it goes
higher and potential energy increases. It is like a bubble that shrinks. I see
motion vectors and how they change.

Another example: for a conflict between 3 people, I kind of see a graph of
relationships with different colored arrows and how they change.

~~~
ansible
_Interesting. I do visualize in my head how kinetic energy depletes as it goes
higher and potential energy increases. It is like a bubble that shrinks. I see
motion vectors and how they change._

I do similar sorts of things... it is excellent that you have trained yourself
to think that way. But for time critical tasks such as catching a fly ball,
that doesn't work so well. We have lots of different processes we use for
different situations.

------
lukev
> Despite this long record of failure, AGI must be possible. And that is
> because of a deep property of the laws of physics, namely the universality
> of computation.

This actually doesn't follow. It is logically _possible_ that general
intelligence is the result of a non-physical process (or at least a process
outside any conception of known physics). It could be, as philosophers have
put it, a homunculus that interfaces with but is not a part of the physical
brain.

Or take the brain-in-the-vat hypothesis. It could be that
intelligence/consciousness is the result of some process that can't be modeled
_within_ the physics of the simulation, and can only be injected from
"outside" the system. This already happens, actually; there are demonstrably
intelligent entities in World of Warcraft (the human players) but that doesn't
imply that the computer physics of the WOW game engine are sufficient to model
intelligence.

~~~
indrax
If these were true, it would mean that atoms in our brain violate physics as
we know it. This source of intelligence would be detectable, because it is
significant enough for neurons to detect it.

The hidden premise of the quote is that we have observed brains enough to be
confident that they are running on physics.

~~~
htns
The current theories of physics do not even begin to offer an explanation for
"subjective experience"/consciousness, which definitely has everything to do
with the brain.

------
woodchuck64
"or that increased computer power will bring it forth (as if someone had
already written an AGI program but it takes a year to utter each sentence)."

No, 1000 years would be more like it, if you add in the years it takes to
learn language.

Here's my very quick and dirty calculation:

A rough estimate of brain data throughput would be 100 billion neurons x 200
firings per second x 1,000 connections each = 20,000,000,000,000,000 bits of
info transmitted per second. 20 million gigabits (20 petabits) of information
move around your brain every second.

The biggest/fastest numbers I can find for an IC are going to be about 3
billion transistors x 8 GHz x 3 (connections per transistor) =
6,480,000,000,000 = 6.5 terabits per second.

20 petabits vs. 6.5 terabits, that's roughly 3 orders of magnitude
difference. If someone wrote a true AGI today, it may well still take 1000
years to do what humans can learn and accomplish in one year (assuming moving
bits of data is the key to AGI).
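
The same envelope math in a few lines of Python, taking the chip figure above
as given:

    # Back-of-envelope brain throughput, using the figures above.
    neurons = 100e9        # ~100 billion neurons
    rate_hz = 200          # firings per second
    connections = 1000     # connections per neuron

    brain_bps = neurons * rate_hz * connections
    print(f"brain: {brain_bps:.1e} bits/s")  # 2.0e+16, i.e. 20 petabits/s

    chip_bps = 6.5e12      # the figure above for a big, fast IC
    print(f"gap: {brain_bps / chip_bps:,.0f}x")  # ~3,077x, ~3 orders of magnitude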

------
davidjhamp
Not to nitpick, but it is pretty well proven that there are other self-aware
animals.

------
jmagoon
There's something even more fundamental to his own thought process than
creativity, and it's odd that he doesn't see it -- namely, "Why does David
Deutsch care about creating an AGI?".

Why is there a drive towards creativity / creation at all? So creativity is
still merely a simulation of a top-level human function -- you could probably
program something to be creative, but how about /wanting/ to be creative? How
about not wanting to be creative? How about being obstinate, or bored?

------
vinceguidry
I think you could get pretty close to general intelligence by modeling three
properties in a computer program. Each of these is difficult, but not outside
the realm of possibility.

\- Awareness of environment, including self. It would need to be able to
reprogram itself.

\- Ability to form intention. It would need to be able to form objectives and
utilize the first property to accomplish them. Also, information gathered from
the environment should be able to inform this.

\- A creative function, constantly running, that combines information from
the first two inputs to form and re-form concepts.

I think that a program with these three aspects, running constantly, could
eventually form opinions and act intelligently. The intention function could
delve into the creative function and form intentions to refactor and make more
efficient the mental processes.

It could have a representation of its own creative function in its
environment, allowing the intention function to explore its own creativity as
an environment. Connections between the three aspects would grow in this
manner, and the program could become deeply introspective.

It would probably take a long time for it to be able to do anything like
human-like thought. We could help the process along by restricting its
environment and giving it 'games' to play. If it's sufficiently fluid in
creating concepts, it could then reuse them for different games.
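
As a very rough sketch of how those three pieces might fit together in a loop
(every name and behavior here is hypothetical, just to make the shape
concrete):

    import random

    class Agent:
        """Toy loop over the three properties described above."""

        def __init__(self):
            self.concepts = []     # material the creative function works on
            self.objectives = []   # intentions formed so far

        def sense(self, environment):
            # Awareness: observe the environment, including the agent itself.
            return {"world": environment, "self": list(self.concepts)}

        def intend(self, observation):
            # Intention: turn what was observed into an objective.
            self.objectives.append(random.choice(observation["world"]))

        def create(self, observation):
            # Creativity: recombine observations and objectives into new
            # (here, trivially random) concepts.
            if self.objectives:
                self.concepts.append(
                    (random.choice(self.objectives), len(observation["self"])))

        def step(self, environment):
            observation = self.sense(environment)
            self.intend(observation)
            self.create(observation)

    agent = Agent()
    for _ in range(10):  # "constantly running"
        agent.step(["explore", "play a game", "refactor self"])
    print(agent.concepts[:3])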

------
mcguire
The difficulty I'm having with this article is that many people, when
discussing intelligence, invoke some specific capability, X; for Deutsch, X
seems to be "creativity", for Penrose, X may or may not be the ability to grok
the truth or falsehood of an arbitrary statement as if one has the personal
cellphone number of God. Then, they go on to assert "computers (or whatever)
cannot (currently) do X, therefore we need to create a grand theory of X to
create _Artificial (General) Intelligence_" (or "computers cannot do X,
therefore A(G)I is doomed").

Unfortunately, they never seem to provide proof that X exists, or is true.
Sure, you can provide a plethora of examples of creativity, but the plural of
"anecdote" is not "data", the plural of "data" isn't really "proof", and I
don't understand creativity well enough to tell whether what I do commonly
(or even occasionally) is creative while a forward-chaining formal logic
thingy printing out a novel, true formula isn't.

Ultimately, that is why I like the Turing test---I don't have to understand
all sorts of magical X's; all I have to do is give you the benefit of the
doubt, without imposing a bunch of truly arbitrary conditions on what you do.

~~~
mujunto
> Unfortunately, they never seem to provide proof that X exists, or is true

I think we can forgive Deutsch for this, because he's claiming that creativity
is part of an unsolved _philosophical_ problem. Which means we don't know how
to think about it yet. We'll know when we do know how to think about it
because there'll suddenly arise answerable questions, proofs, definitions, and
whatnot.

Unfortunately the Turing Test can't cut through the philosophical wrangling
because, for example, imitating a human being successfully is not the same
thing as evidence of thinking. Without an explanatory theory we wouldn't even
know how to interpret the relevant evidence.

------
alphonse23
Has anyone ever tried using the first law of thermodynamics to prove that AGI
is impossible? It's somewhat dependent on what one considers "intelligent",
of course. But let me take a stab at it.

Say AGI is a computer machine and/or algorithm that's capable of "creatively"
building a smarter version of itself. Then (here's the proof): if we did
build such a machine, it would technically be a perpetual motion machine, and
therefore a violation of the first fundamental law of thermodynamics: "In all
cases in which work is produced by the agency of heat, a quantity of heat is
consumed
which work is produced by the agency of heat, a quantity of heat is consumed
which is proportional to the work done; and conversely, by the expenditure of
an equal quantity of work an equal quantity of heat is produced." (Rudolf
Clausius, 1850)

Or, assuming that AGI is just "in the software", the heat produced by the
computation would continually increase as an ever more complicated
algorithm/computation is formulated, and therefore violate the first law of
thermodynamics -- again, assuming a more "intelligent" computation consumes
more heat/energy.

------
Houshalter
> observing on thousands of consecutive occasions that on calendars the first
> two digits of the year were ‘19’. I never observed a single exception
> until, one day, they started being ‘20’...

> ...it is simply not true that knowledge comes from extrapolating repeated
> observations. Nor is it true that ‘the future is like the past’, in any
> sense that one could detect in advance without already knowing the
> explanation.

This is obviously untrue. You could easily train a machine learning algorithm
to count just by showing it examples of numbers. This is how we teach
human children to count, by showing them examples. Not giving them some
magical "explanation".

And he goes on to base the rest of his argument on this point.
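
For what it's worth, here is a minimal sketch of that claim (my own toy
example, using scikit-learn): a trivial learner that represents numbers as
numbers, rather than as digit strings, picks up counting, and Deutsch's
'19'-to-'20' rollover, purely from examples.

    # Train a trivial learner to "count" purely from examples of numbers.
    from sklearn.linear_model import LinearRegression

    examples = [[n] for n in range(1, 99)]      # numbers it has seen
    successors = [n + 1 for n in range(1, 99)]  # what came after each one

    model = LinearRegression().fit(examples, successors)
    print(model.predict([[99]]))    # ~[100.]  a successor it never observed
    print(model.predict([[1999]]))  # ~[2000.] the '19' -> '20' rollover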

> Some people are wondering whether we should welcome our new robot
> overlords. Some hope to learn how we can rig their programming to make them
> constitutionally unable to harm humans (as in Isaac Asimov’s ‘laws of
> robotics’), or to prevent them from acquiring the theory that the universe
> should be converted into paper clips (as imagined by Nick Bostrom). None of
> these are the real problem. It has always been the case that a single
> exceptionally creative person can be thousands of times as productive —
> economically, intellectually or whatever — as most people; and that such a
> person could do enormous harm were he to turn his powers to evil instead of
> good.

Einstein's brain was only slightly different than any other human's, even the
dumbest village idiot. Humans themselves are only slightly different than
chimpanzees.

So yes, an AI with a brain of entirely different architecture, running on
computers millions of times faster and several times larger than a human
brain, would indeed be pretty concerning.

------
3rd3
> The very laws of physics imply that artificial intelligence must be
> possible.

What if the chemical reactions that our intelligence depends on somehow allow
for computations using real numbers? Then we wouldn’t be able to reproduce
these processes with our means of computation.

------
jimbokun
"‘Smartest machine on Earth’, the PBS documentary series Nova called it, and
characterised its function as ‘mimicking the human thought process with
software.’ But that is precisely what it does not do."

I heard a grad student who did some work on the Watson project say basically
the same thing.

------
yc-kjh
If (and only if) materialism is true, Deutsch is right. AGI is a foregone
conclusion. It is not only possible; it _must_ eventually happen.

Where he goes wrong is in assuming that materialism is true in the first
place.

------
eli_gottlieb
If AI can't ever work, how does AIXI play video games?

EDIT: And then, of course, there's the AI Effect: "If it didn't kill all
humans, it must not be AI."

------
dnautics
> or predict and prevent a meteor strike on itself

Wait, we don't know that for sure, yet.

------
applecore
_(2012)_

------
moron4hire
So evolution "works" in the sense that anything that doesn't work doesn't
stick around to show its face. There isn't any intention behind it, yet we
people, who clearly have intention (and I'm not even going to stand for an
argument about whether or not free will exists; you can take that shit
outside with the rest of the garbage), are a product of it.

But we're kind of long past the point where just any old random slurry of
chemicals is going to get means-tested in the great arena of life. Life is
not finding the right set of chemicals to combine; life is a specific set of
chemicals and the right orientations of different copies of those chemicals.

So I think we're at a point where binary code, the instructions to run on the
processor, is akin to chemicals in the physical world. We try to treat them
like DNA, but you can't just toss a bunch of chemicals in a bucket at random
and expect life to come out. 100 billion times out of 100 billion times,
random chemicals in a bucket makes you nothing close to life. Ultimately,
chemicals are at the core, but they aren't sufficient. The right chemicals are
needed, and they interact in such a way that infinite variation is the result.

And DNA is a code--a deterministic, exceedingly discrete code. Yet somehow
(hand-waving), from such arises the non-deterministic, comparatively-infinite
variability of human behavior. So in that sense, I don't think he's
_necessarily_ correct that AGI is a "different type of program than we've ever
programmed."

But all that is just a way to create a system that is intelligent; it is not
itself intelligent. It's a road, not a destination. Living things, on the
other hand, have goals and try to achieve them. Not just have goals, but
generate goals--create their own notions of what to do and how to do it and
why the doing of it is important.

You touch fire, your hand recoils, because in you is a system for detecting
potential damage and the understanding that damage is not something you want
on your docket. Computer touches fire, computer recoils, because in it is a
system for detecting potential damage and YOUR understanding that damage is
not good. The computer didn't conclude on its own that damage was bad. It
never had the sense that it existed. And this isn't even an "intelligent"
response, this one is merely instinct.

So I think the big, missing question in AGI is, "what _could_ a computer
want?" We could program a computer to have certain goals, but that is not the
same thing as a computer sitting around and saying, "hey, you know what? Let's
go to the beach this weekend." We foist our own goals on the computer and
instruct it on how to understand those goals, and are disappointed when it
fails to get the point of the goals at all and sits there blinking at us. How
can you ever hope to have an intelligent computer if it isn't intelligent on
its own terms?

IDK, I am rambling. Would you be intelligent or have any hope of becoming
intelligent if you didn't create your own designs on your future, couldn't
perceive anything to test your actions against your desires for the future,
and had no means of your own to ever correct these issues? It
just seems like the only way AGI will happen is through something incredibly
simple that allows a computer to put its own parts together, see the result,
and arbitrarily evaluate it.

------
6d0debc071
> _Unfortunately, what we know about epistemology is contained largely in the
> work of the philosopher Karl Popper and is almost universally underrated and
> misunderstood (even — or perhaps especially — by philosophers). For example,
> it is still taken for granted by almost every authority that knowledge
> consists of justified, true beliefs_

\-------------------

The Gettier problem has been well known in philosophy since the '60s.
Probably the most famous example would be barn façades, first showing up in
the mid-'70s. It's taught in undergraduate courses. It's the second bullet
point in the Stanford Encyclopedia of Philosophy's Epistemology article.

\-------------------

> _How could I have ‘extrapolated’ that there would be such a sharp departure
> from an unbroken pattern of experiences, and that a never-yet-observed
> process (the 17,000-year interval) would follow?_

\-------------------

One rather assumes your experience with numbers has included 20 following 19
before, and that you extrapolated the rules from that and similar experiences
with assigning numbers to things. You were, after all, previously told that
that's how years work - and doubtless you've lived through at least one year
changing.

\-------------------

> _Even in the hard sciences, these guesses have no foundations and don’t need
> justification. Why? Because genuine knowledge, though by definition it does
> contain truth, almost always contains error as well. So it is not ‘true’ in
> the sense studied in mathematics and logic. Thinking consists of criticising
> and correcting partially true guesses with the intention of locating and
> eliminating the errors and misconceptions in them, not generating or
> justifying extrapolations from sense data._

\-------------------

Be that as it may, some guesses are better than others - generally the
guesses that are based on more data. Lock someone in a room for 19 years,
don't talk to them beyond the basic social interaction required for them to
develop language, and then ask them to guess what an atom bomb is: they're
not going to get very far. Even guessing what an atom is, or a bomb, they'd
be hopelessly outside the context of their experience.

Guesses, in so far as they're meaningful, have foundations. Those foundations
limit what you can guess about and have any practical chance of refining
towards truth with more evidence. You can't start off knowing nothing and then
go to nuclear weapons in one step. You have to make smaller guesses, based on
what you know.

And much though the author might criticise Bayesian probability, it is bound
up in the idea that probability is a matter of dividing the search space
between the explanations we can come up with for a thing and then weighting
them with evidence.
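
That is just Bayes' theorem with the hypothesis space made explicit (standard
form, for reference):

    % Posterior over the candidate explanations h we can come up with for
    % evidence e: the search space is divided among the hypotheses, and
    % their prior weights are reweighted by the evidence.
    P(h \mid e) = \frac{P(e \mid h)\, P(h)}{\sum_{h'} P(e \mid h')\, P(h')}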

