
John Searle: Consciousness in Artificial Intelligence [video] - nolantait
https://www.youtube.com/watch?v=rHKwIYsPXLg
======
bhickey
There isn't much new here. Skip ahead to the first audience question from Ray
Kurzweil
([http://www.youtube.com/watch?v=rHKwIYsPXLg&t=38m51s](http://www.youtube.com/watch?v=rHKwIYsPXLg&t=38m51s)).

Kurzweil, in summary, asks: "You say that a machine manipulating symbols can't
have consciousness. Why is this different from consciousness arising from
neurons manipulating neurotransmitter concentrations?" Searle gives a
non-answer: "My dog has consciousness because I can look at it and conclude
that it has consciousness."

~~~
chubot
Yeah honestly I don't get what he is really contributing (and I'm sort of an
AI skeptic). In 2000 in undergrad, I recall checking out some of his books
from the library because people said he was important, and I learned about the
"Chinese Room" argument [1] in class.

How is it even an argument? It doesn't illuminate anything, and it's not even
clever. It seems like the most facile wrong-headed stab at refutation, by
begging the question. As far as I can tell, the argument is, "well you can
make this room that manipulates symbols like a computer, and of course it's
not conscious, so a computer can't be either"? There are so many problems with
this argument I don't even know where to begin.

The fact that he appears to think that changing a "computer" to a "room" has
persuasive power just makes it all the more antiquated. As if people can't
understand the idea that computers "just" manipulate symbols? Changing it to a
"room" adds nothing.

[1] [http://plato.stanford.edu/entries/chinese-room/](http://plato.stanford.edu/entries/chinese-room/)

~~~
drdeca
I thought the idea was that the only part of the room actually doing anything
(the person) doesn't understand Chinese?

I mean, agree with it or not, but I think that's a bit stronger than just
making it seem intuitively worse because it's a room instead of "a computer"?

I think the important part isn't the swap of "room" for "computer", but
instead the swap of "person" for "cpu"?

~~~
Chathamization
Yeah, but the system would be the person + the lookup tables, not just the
person. The problem is that we don't tend to ask "does a room with a person
and several books in it have this knowledge?" Relying on a system that doesn't
usually get grouped together (there's no term for the system of a human plus a
book inside a room), having only one animate object (so that people think of
the animate object as the system rather than the animate and inanimate objects
together), and asking the question only about the animate part of the system
all suggest that the purpose of the thought experiment is to mislead people.

A better example would be saying something like - does this company have the
knowledge to make a particular product? We can say that no individual member
of the company does, but the company as a whole does.

~~~
drdeca
I think this is called the "systems response".

There's a whole series of responses back and forth, with different ideas about
what is or is not a good response.

One idea describes a machine where each state of the program is pre-computed,
and the computer steps through the states one by one. At each state, if a
switch is flipped on and the next pre-computed state is wrong (i.e. it is not
the step the program would actually produce from the current state), the
machine computes the correct next state instead; if the switch is off, it just
continues along the pre-computed states. Whether the switch is on or off, if
all the pre-computed states are correct, the same things happen, and the
switch is never consulted at all. If all the pre-computed states are nonsense
and the switch is on, then the machine runs the program correctly, despite the
pre-computed states being nonsense.
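In code, the setup might look something like this (a toy sketch of my own; the
trivial counter in `step` stands in for any program):

```python
def step(state):
    # The "real" program: compute the next state from the current one.
    # A trivial counter here, standing in for any computation.
    return state + 1

def run(precomputed, switch_on, steps):
    # Walk a list of pre-computed states. With the switch on, each
    # pre-computed state is checked against what the program would
    # actually produce, and corrected when wrong. With the switch off,
    # the machine blindly follows the pre-computed list.
    state = precomputed[0]
    trace = [state]
    for i in range(1, steps):
        proposed = precomputed[i]
        if switch_on and proposed != step(state):
            proposed = step(state)  # correct a wrong pre-computed state
        state = proposed
        trace.append(state)
    return trace

# When every pre-computed state happens to be correct, the switch is
# never consulted: on and off produce identical behaviour.
correct = list(range(10))
assert run(correct, True, 10) == run(correct, False, 10)
```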

So, suppose that if the pre-computed states are all wrong and the switch is
on, that counts as conscious. Then, if the pre-computed states are all correct
and the switch is on, would that still be conscious? What if almost all the
pre-computed states were wrong, but a few were right? There doesn't seem to be
an obvious cutoff point between "all the pre-computed steps are wrong" and
"all the pre-computed steps are right" at which consciousness would switch on
or off. So one might conclude that the machine where all the pre-computed
steps are right, with the switch on, is just as conscious as the one which has
the switch on but all the pre-computed states wrong.

But then what of the one where all the pre-computed states are right, and the
switch is off?

The switch does not interact with the rest of the machinery unless a
pre-computed next step would be wrong, so how could it be that the one with
all the pre-computations correct is conscious when the switch is on, but not
when it is off?

But the one with all the pre-computations correct, and the switch off, is not
particularly different from just reading the list of states in a book.

If one grants consciousness to that, why not grant it to e.g. fictional
characters that "communicate with" the reader?

One might come up with something like: it depends on how the machine interacts
with the world, and it doesn't make sense for it to have pre-computed steps if
it is interacting with the world in new ways. That might be a way out. Or one
could argue that it really does matter which way the switch is flipped, and
that flipping it back and forth switches the machine between being actually
conscious and being, basically, a p-zombie. And speaking of which, you could
ask, "well, what if the same thing is done with brain states being
pre-computed?", etc.

I think the Chinese Room problem, while not conclusive, is a useful
introduction to these issues?

~~~
FeepingCreature
> But the one with all the pre-computations correct, and the switch off, is
> not particularly different from just reading the list of states in a book.

The states were (probably) produced by computing a conscious mind and
recording the result.

Follow the improbability. The behavior has to come from _somewhere_. That
somewhere is probably conscious.

Similarly, authors are conscious, so they know how conscious characters
behave.

------
DonaldFisk
I think Searle's mostly correct and Kurzweil's completely wrong on this. It
took me a long time to understand Searle's argument, because Searle conflates
consciousness and intelligence and this confuses matters. Understanding
Chinese is a difficult problem requiring intelligence, but I don't think it
requires consciousness.

It is important to distinguish between "understanding Chinese" and "knowing
what it's like to understand Chinese". We immediately have a problem: knowing
what it's like to understand Chinese involves various qualia, none of which is
unique to Chinese speakers.

So I'll simplify the argument. Instead of a room with a book containing rules
about Chinese and a person inside who doesn't understand Chinese, we have a
room with some coloured filters and a person who can't see any colours at all
(i.e. who has achromatopsia). Such people (e.g.
[http://www.achromatopsia.info/knut-nordby-achromatopsia-p/](http://www.achromatopsia.info/knut-nordby-achromatopsia-p/))
will confirm they have no idea what it's like to see colours. If you shove a
sheet of coloured paper under the door, the person in the room will place the
different filters on top of the sheet in turn and, by seeing how dark the
paper then looks, determine its colour, which he'll write on the paper and
pass back to the person outside. The person outside thinks the person inside
can distinguish colours, but the person inside will confirm that not only can
he not, he doesn't even know what that's like. Nothing else in the room is
obviously conscious.
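The room's procedure is purely mechanical. A toy sketch (my own construction;
the darkness values are invented for illustration):

```python
# How dark the sheet looks under each filter identifies its colour,
# with no colour experience required anywhere in the loop.
# Signatures: darkness observed under the (red, green, blue) filters.
SIGNATURES = {
    "red":   (0.1, 0.9, 0.9),  # red paper looks light under a red filter
    "green": (0.9, 0.1, 0.9),
    "blue":  (0.9, 0.9, 0.1),
}

def name_colour(readings):
    # Pick the colour whose signature is closest to the observed readings.
    return min(
        SIGNATURES,
        key=lambda c: sum((a - b) ** 2 for a, b in zip(SIGNATURES[c], readings)),
    )

print(name_colour((0.85, 0.15, 0.9)))  # -> "green"
```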

Apropos of the dog, this is the other minds problem. It's entirely possible
that I'm the only conscious being in the universe and everyone else (and their
pets) are zombies. But we think that people, dogs, etc. are conscious because
they are similar to us in important ways. Kurzweil presumably considers
computers to be conscious too. Computers can be intelligent, and maybe in a
few years or decades will be able to pass themselves off over the Internet as
Chinese speakers, but there's no reason to believe computers have qualia (i.e.
know what anything is like), and given the above argument, every reason to
believe that they don't.

~~~
redwood
I disagree completely. After time the color filter will start to associate
various concepts and feelings and images with various colors. This association
is what starts making the colors themselves have meaning, even if they can't
see the colors the same way that you and I can. There's no way to prove that
we all see colors the same way anyway, but that doesn't mean we don't believe
we're conscious. I think I see that you're saying we cannot make any claims
about others, perhaps, but can only talk about how we feel. But I feel like
the room example is actually misleading in this respect. Another way of
thinking about it: our brain starts to associate things, and it's those
clusters of associations that give those things meaning. The experience of
color is only important because color has a web of other associated
experiences that those colors remind us of. So extend the room experiment to
the experience of a baby who, throughout its entire life, sees colors (or the
filtered version of those colors) at various moments and associates them with
various things. In this example we can imagine that the baby will in fact
associate, say, blue with that great unknown half of our outside ceiling that
we see during the day. And then blue will take on something more, though
admittedly it is difficult to explain.

~~~
DonaldFisk
> After time the color filter will start to associate various concepts and
> feelings and images with various colors. This association is what starts
> making the colors themselves have meaning, even if they can't see the colors
> the same way that you and I can.

The filters are just pieces of transparent coloured plastic. How are they
capable of forming associations?

Also, associations on their own (e.g. blue with sky, red with blood, green
with grass) don't give you any idea what colours are like. Knut Nordby (and
many other people with achromatopsia) knew these associations as well as you
or I know them, but made it quite clear that he had no idea what it was like
to see in colour.

------
nova
I can only recommend reading this paper:
[http://www.scottaaronson.com/papers/philos.pdf](http://www.scottaaronson.com/papers/philos.pdf)

It really lives up to its title. Suddenly computational complexity is not just
a highly technical CS matter anymore, and the Chinese Room paradox is
explained away successfully, at least for me.

------
amoruso
Searle makes two assertions:

1) Syntax without semantics is not understanding.

2) Simulation is not duplication.

Claim 1 is a criticism of old-style Symbolic AI that was in fashion when he
first formulated his argument. This is obviously right, but we're already
moving past this. For example, word2vec or the recent progress in generating
image descriptions with neural nets. The semantic associations are not nearly
as complex as those of a human child, but we're past the point of just
manipulating empty symbols.
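As a concrete illustration (a minimal sketch assuming the gensim package and
its pretrained vectors, not anything specific from the talk):

```python
import gensim.downloader

# Load pretrained word2vec vectors (downloads on first use).
model = gensim.downloader.load("word2vec-google-news-300")

# Words are dense vectors whose geometry encodes associations learned
# from usage, not empty symbols: king - man + woman lands near queen.
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
print(model.similarity("computer", "laptop"))
```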

Claim 2 is an assertion about the hard problem of consciousness. In other
words, about what kinds of information processing systems would have
subjective conscious experiences. No one actually has an answer for this yet,
just intuitions. I can't really see why a physical instantiation of a certain
process in meat should be different from a mathematically equivalent
instantiation on a Turing machine. He has a different intuition. But neither
one of us can prove anything, so there's nothing else to say.

~~~
mtrimpe
I think Claim 1 is actually more about determinism: that if, knowing all the
inputs, you can reliably get the same outputs, then what you have isn't
consciousness.

Neural nets are somewhat starting to escape that dynamic, but there still
isn't a neural net that reliably pulls in a continuous stream of randomness to
generate meaningful behaviour the way our consciousness does.

Now, to be honest, I'm not _entirely_ sure John Searle would agree that that
_is_ consciousness when we do get there, but I do agree with him that
deterministic consciousness is essentially a contradictio in terminis.

------
cromwellian
The systems response is pretty much the right answer. You can put yourself at
any level of reductionism of a complex system and ask how in the hell the
system accomplishes anything. If you imagine yourself running a simulation of
physics on paper for the universe, you may ask yourself, how does this
simulation create jellyfish.

I think people fall for Searle's argument the same way people fall for
creationist arguments that make evolution seem absurd. Complex systems that
evolve over long periods of time have enormous logical-depth complexity and
exhibit emergent properties that really can't be computed analytically, but
only by running the simulation and observing macroscopic patterns.

If I run a cellular automaton that computes the sound wave frequencies of a
symphony playing one of Mozart's compositions, and it takes trillions of steps
before even the first second of sound is output, you can rightly ask, at any
state, how is this thing creating music?
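The same point in miniature (a toy sketch of my own, using Rule 110, a
famously Turing-complete cellular automaton): nothing in the local update rule
hints at the large-scale structures, which you only see by running it.

```python
# Rule 110: each cell's next value depends only on itself and its two
# neighbours; macroscopic patterns appear only when you run it.
RULE = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def run_rule_110(width=64, steps=30):
    row = [0] * width
    row[-1] = 1  # single live cell as the seed
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = [
            RULE[(row[(i - 1) % width], row[i], row[(i + 1) % width])]
            for i in range(width)
        ]

run_rule_110()
```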

------
spooningtamarin
Consciousness and understanding are human-created symbolism. Talking about
them seriously is a waste of time.

I could be an empty shell imitating a human perfectly; all other humans would
buy it despite my lack of consciousness, and nothing would be different. From
their perspective I exist; from mine, I have no consciousness.

How does one know that I really understand something? Maybe I can answer all
the questions to convince them?

------
kriro
It's pretty frustrating to watch. It feels like an endless repetition of
"well, humans and dogs are conscious because that's self-evident". He seems to
apply no demarcation criterion other than "I know it when I see it". [I guess
having a semantics is his criterion, but he doesn't elaborate on a criterion
for that.]

The audience question about intelligent design summed up my frustration nicely
(or rather the amoeba evolving part of it).

------
sethev
I think what it boils down to is that Searle believes consciousness is a real
thing that exists in the universe. A simulation of a thing isn't the same as
the thing itself, no matter how accurate the outputs. The Chinese Room
argument just amplifies that intuition (my guess is that the idea of a room
was inspired by the Turing Test).

I think studying the brain (as opposed to philosophical arguments) is the
thing that will eventually answer these kinds of questions, though.

------
pbw
I think the argument about consciousness is vacuous. Searle admits we might
create an AI which acts 100% like a human in every way.

Nothing Searle says stands in the way of creating intelligent or super-
intelligent entities. All Searle is saying is those entities won't be
conscious.

No one can prove this claim today. But more significantly, I think it's
extremely likely no one will ever prove it. Consciousness is a private
subjective experience; I think it's likely you simply cannot prove it exists
or doesn't exist.

Mankind will create human-level robots and we'll watch them think and create
and love and cry, and we'll simply not know what their conscious experience is.

Even if we did _prove_ it one way or the other, the popular opinion would be
unaffected.

Some big chunk of people will insist robots are conscious entities who feel
pain and have rights. And some big chunk of people will insist they are not
conscious.

It might be our final big debate. An abstruse _proof_ is not going to change
anyone's mind. Look at how social policies are debated today. Proof is not a
factor.

------
orblivion
So, supposing there's any chance that it has consciousness, is there any sort
of movement doing all it can to put the brakes on AI research? If it's true,
it's literally the precursor to the worst realistic (or hypothetical, really)
outcome I can fathom, which has been discussed before on HN (simulated hell,
etc). I'm not sure why more people aren't concerned about it. Or is it just
that there's "no way to stop progress" as they say, and this is just something
we're going to learn to live with, the way we live with, say, the mistreatment
of animals?

~~~
adrianN
We are sufficiently far away from creating machines that humans would consider
conscious that it's not really a problem so far. Eventually we'll probably
have to think about robot rights, but I'd guess we still have a few decades
until they're sufficiently advanced. Judging from how we treat, e.g., great
apes, who are so very similar to us, I wouldn't want to be a robot capable of
suffering.

~~~
orblivion
I'd think that if there are people forward thinking enough to consider the
consequences to humans (Elon Musk, Singularity Institute), there should be
people forward thinking enough to consider the consequences to the AIs.

------
nnq
This guy is so smart but at the same time such an idiot. SYNTAX and SEMANTICS
are essentially the SAME THING. It's only a context-dependent difference, and
the difference is quantitative, even if we still don't have a good enough
definition of the quantitative variables underlying them. You must have a
really "fractured" mind not to instantly "get it". And "INTRINSIC" is simply a
void concept: nothing is intrinsic; everything (the universe and all) is
obviously observer-dependent. It may just be that the observer is a "huge
entity" that some people choose to personalize and call God.

It's amazing to me that people with such a pathological disconnect between
mind and intuition can get so far in life. He's incredibly smart, has a great
intuition, but when exposed to some problems he simply can't CONNECT his
REASON with his INTUITION. _This is a MENTAL ILLNESS and we should invest in
developing ways to treat it, seriously!_

Of course "the room + person + books + rule books + scratch paper" can be
self-conscious. You can ask the room questions about "itself" and it will
answer, proving that it has a model of itself, even if that model is not
explicitly encoded anywhere. It's just like mathematics: if you have a
procedural definition of the set of all natural numbers (i.e. a definition
that can be executed to generate the first and then each next natural number),
you "have" the entire set of natural numbers, even if you don't have them all
written down on a piece of paper. In the same way, if you have the processes
for consciousness, you have consciousness, even if you can't pinpoint "where"
exactly in space and time it is. Consciousness is closer to a concept like
"prime numbers" than to a physical thing like "a rock": you don't need a space
and time for the concept of prime numbers to exist in; it just is.
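The natural-numbers analogy in code (a trivial sketch): a procedural
definition "contains" the whole infinite set, even though no complete list of
its members exists anywhere.

```python
from itertools import count, islice

def naturals():
    # The set of natural numbers as a process: 0, 1, 2, ... forever.
    return count(0)

def primes():
    # The set of primes as a process, via trial division.
    for n in count(2):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n

print(list(islice(naturals(), 5)))  # [0, 1, 2, 3, 4]
print(list(islice(primes(), 5)))    # [2, 3, 5, 7, 11]
```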

His way of "depersonalizing" conscious "machines" is akin to Hitler's way of
depersonalizing Jews, and this "mental disease" will probably lead to similar
genocides, even if the victims will not be "human" ...at least in the first
phase, because you'll obviously get a HUGE retaliation in reply to any such
stupidity, and my bet is that such a retaliation will be what ends the human
race.

Now, of course the Chinese room discussion is stupid: you can't have "human-
like consciousness" with one Chinese room. You'd need a network of Chinese
rooms that talk to each other and also operate under constraints that make
their survival dependent on their ability to model themselves and their
neighbors, in order to generate "human-like consciousness".

~~~
nsns
Well, it's Searle after all. It's always funny to re-read Derrida's attack on
his problematic line of thought[0].

[0] [https://en.wikipedia.org/wiki/Limited_Inc](https://en.wikipedia.org/wiki/Limited_Inc)

