
Ask HN: Opinion about Searle's Chinese Room? - wsieroci
Hi,

I would like to know your opinion about John Searle's Chinese Room argument against "Strong AI".

Best,
Wiktor
======
cousin_it
Some people are uncomfortable with the idea that brains can work on "mere"
physics. But what's the alternative? A decree from Cthulhu saying human brains
are the only devices in the universe that can communicate with consciousness,
and all other objects are doomed to be dumb matter? I think that's really
unlikely. So it seems like we must bite the bullet and admit that the
computation implemented in the Chinese room exhibits genuine understanding.
I'm not completely certain, because we don't really understand consciousness
yet and we could always find new evidence, but the evidence so far points in
one direction.

------
fallingfrog
Well, a Chinese room can simulate a computer, a computer can run a physics
simulation and a brain is a physical thing and brains are conscious. So it's
pretty open and shut imho; yes, a Chinese room can be conscious. That much is
clear. Now what it means to say that something is conscious is another
question. I've never seen the word rigorously defined.

------
lazyant
we had a thread just yesterday
[https://news.ycombinator.com/item?id=10867791](https://news.ycombinator.com/item?id=10867791)
and also last month
[https://news.ycombinator.com/item?id=10740748](https://news.ycombinator.com/item?id=10740748)

~~~
p4wnc6
I'm incredibly surprised that, across both previous threads, the piece "Why
Philosophers Should Care About Computational Complexity" by Scott Aaronson [1]
was only mentioned in one comment, and seemed to spark no further discussion.

Aaronson dedicates significant space specifically to the Chinese Room problem,
and has a good literature review of different computer science objections (my
favorite was probably the one estimating the entropy of English and coming up
with space bounds on the size of all 5-minute long conversations in English).

It is one of the most comprehensive takedowns of the Chinese Room argument.

Along similar lines, Aaronson discusses later in the paper the problem "Does a
rock implement every Finite-State Automaton?" popularized by Chalmers [2],
along with many computer science rebuttals.

[1]
[http://www.scottaaronson.com/papers/philos.pdf](http://www.scottaaronson.com/papers/philos.pdf)

[2] [http://consc.net/papers/rock.html](http://consc.net/papers/rock.html)

~~~
makebelieve
Aaronson does not critique the Chinese Room argument at all, at least not in
the paper cited.

~~~
samizdatum
He does, on page 14.

~~~
makebelieve
Um, he doesn't. He is talking about Searle's argument, but he is not refuting
it. Explicitly he says:

"Where others invoked complexity considerations to argue with Searle about the
metaphysical question, I’m invoking them to argue with Penrose about the
practical question."

Aaronson is not arguing with Searle at all; he is using Searle's argument as
an example of other people's faulty thinking about complexity. Aaronson
understands (which is why he doesn't elucidate Searle's argument) that Searle
is talking about meaning, and Aaronson is criticizing other people's critiques
of Searle that are based in computation for failing to understand complexity:

"I find this response to Searle extremely interesting—since if correct, it
suggests that the distinction between polynomial and exponential complexity
has metaphysical significance. According to this response, an exponential-
sized lookup table that passed the Turing Test would not be sentient (or
conscious, intelligent, self-aware, etc.), but a polynomially-bounded program
with exactly the same input/output behavior would be sentient."

More to the point, Aaronson doesn't address the meaning (or, as he says, the
metaphysical) questions at all. He is interested in the complexity problem of
producing a machine that passes the Turing Test, and how philosophers don't
seem to grok that very practical problem. Searle recognizes the practical
problem for what it is (lookup tables can be Turing complete), talks about
meaning, asks us to consider where the meaning of things resides, and shows
that meaning does not exist in the functions or the data the functions
process. So even if a machine passes the Turing Test, it merely fools the
observer. The machine still would not be "intelligent"; it would not be
conscious.

~~~
p4wnc6
Can you elaborate on what you mean when you say "lookup tables can be Turing
complete", as opposed to just able to pass some given Turing test? I feel like
there's some conflation about what lookup table we're talking about.

Also, my reading of that passage by Aaronson is very different from yours. I
read it as him saying, these computer scientists have put forward a fairly
serious and compelling set of arguments that the actual complexity class of
the translator algorithm has philosophical importance. Meanwhile, the
analytical philosophy response is just to say something hand-wavy about
"meaning."

I agree Aaronson's not saying anything definitive (he is extremely
conservative about making definitive claims, despite how hotly involved he
becomes in the few cases when he _does_ make definitive claims). But I don't
agree that he is framing it to raise questions about the validity of the CS
rebuttal papers that he cites.

~~~
makebelieve
Searle uses a lookup table argument in the Chinese Room. I was making the case
that lookup tables as a computational tool can be Turing complete, and I'm
assuming Searle covered Turing completeness in his argument. (I read the
Chinese Room a long time ago, so even if he doesn't cover Turing machines
explicitly, he has argued elsewhere, explicitly, that Turing machines and the
kinds of outcomes they can produce do not get us past the problem of meaning
elucidated by the Room argument.)

I think Aaronson demolishes the other critics' arguments because he shows they
focus on the lookup table and attach sentience to polynomially bounded
solutions but not to exponentially complex solutions. My point is that the
lookup table is irrelevant, in practical terms, because the lookup table in
Searle's argument exists only as a "philosophical fiction", as Aaronson says.
But I was pointing out that lookup tables can be Turing complete, and hence
any Turing machine could be substituted for the mechanism in Searle's room,
and thus the particular mechanism of the room's operation is irrelevant (in
any kind of Turing completeness sense).

I took Aaronson as being humorous here: "Yet, as much as that criterion for
sentience flatters my complexity-theoretic pride, I find myself reluctant to
take a position on such a weighty matter." Because there is no obvious reason
a polynomially bounded solution should somehow be sentient when an
exponentially complex solution is not. How could the lower complexity bound
confer sentience?

Aaronson's paper is about the practical requirements to pass a Turing test in
some given amount of time. It is a testable problem. Searle's argument is
about what it means to produce actual sentience. Aaronson does not really get
into this.

There is an argument against Searle along the lines of "what are the
requirements for a machine which passes the Turing test for Searle?" And
Searle's responses to these practicalities are weak, at best. But those
arguments have nothing to do with Searle's point in the Chinese Room. Aaronson
sort of reflects those critiques of Searle, but he also realizes the hand-wavy
problem of meaning is something he doesn't address.

Personally, I get very frustrated when people mistake the problem of sentience
for the testable hypothesis of a Turing test (or any of the other "practical"
problems). I think the problem of sentience is a real problem, and it requires
a practicable solution to produce machine sentience, machines which have and
understand meaning. So arguments against Searle's Room that do not address how
to instantiate meaning in a computer system are disappointing because they
ignore his basic point. (Aaronson is making arguments about complexity and
Turing tests) Ignoring the key problem is not a critique of that problem. And
critiquing an argument is not necessarily a critique of the point or concept
the argument elucidates.

Meaning is a real thing.

If you sit down to make a machine conscious, you have to deal with what
awareness is and how meaning and representation work, at the very beginning,
and then figure out how to make computers do representational processing and
instantiate awareness.

All of the modern approaches abandon the problem of actual sentience and the
problems of meaning, because they are hard, or at least too hard to finish in
the timeline of a PhD. So people do the reverse: start with the algorithms,
solve a testable sub-problem, and make some practical progress in computer
science or in industry. (Which is a good thing!)

Nearly everyone abandons the hard problem of meaning and how meaning works and
chooses to solve a different problem. This does not mean our solutions to
those other problems are solutions to the hand-wavy problem of meaning. It
rather makes me think of people who figured out how to make fake feathers and
then assumed the process of making fake feathers would naturally lead to human
flight.

I think this is a clue that the typical computer science approach, which has
made great progress in what we call artificial intelligence, is maybe the
wrong approach to solve the sentience problem. Not that computer science is
irrelevant, but the general computer science approach simply does not provide
a path toward, or the theoretical foundation for, making computers which are
aware and can generate and understand meaning.

~~~
p4wnc6
> Searle uses a lookup table argument in the Chinese Room.

I know that. I was asking about your claim that a lookup table can be Turing
complete, as opposed to merely able to pass a given Turing test. A lookup
table does not express universal computation, at least not on its own. For
instance, you couldn't use a fixed-size lookup table (with fixed, immutable
content) to express some floating point number of precision N, where N is so
high that even just the bits required to represent it outnumber the available
space in the lookup table. You'd need something "outside of" the lookup table
that could mutate its memory, or hold state of its own while reading from the
lookup table. The complexity class of doing any of this would be irrelevant to
Turing _completeness_.

On the topic of the complexity threshold for sentience, I think the reasons
are extremely simple and obvious. If you take a Turing test to involve
convincing a human being of something (as it was originally conceived), then
there are inherent time constraints to performing actions. If you can
adequately respond to any question posed in English, but it takes you longer
than the age of the universe to do so, then as far as anyone is concerned, you
_actually can't_ do it.

Imagine that one day we create an artificial being which we all agree is
sentient. Suppose, without being too precise, that the entity "runs" in some
amount of time -- like it can answer verbal questions in a matter of minutes,
it can solve complicated problems in a matter of hours or days, and so on.

Now suppose we can play around with the insides of that being and we have some
sort of dial that lets us "slow it down." We click the dial once, and the
being now takes hours to make verbal responses, and takes weeks to solve
complex problems. We click the dial again and it takes weeks to make a verbal
response, and months to solve a complex problem... and so on.

I think it is reasonable to believe that after enough clicks the entity _is
not sentient_ even if the underlying software is conceptually similar and just
differs by controlling its complexity class.

I'm not saying I agree that this definitely defines sentience. I'm just saying
it's extremely reasonable that _it might_, and it's not at all obvious to me
that an exponentially slow entity, _if we could wait long enough to verify_,
is "just as sentient" as an entity that is verifiably more efficient.

Even further, the particular argument that I pointed out actually used
estimated numbers to come up with _space_ bounds, _not_ time bounds.
Basically, the argument was that even if you truncate to considering _merely_
5-minute-long English conversations (or shorter), then creating a lookup table
to be able to look up responses is _physically_ impossible -- who even cares
how long it would take to actually _use_ it if you could create it -- you
can't even create it, and so it is somewhat fallacious to even begin a thought
experiment by saying "suppose there is this big lookup table that physics
logically excludes from possibility." It's just not even useful, _not even as
a thought experiment, because it doesn't speak about what is possible, from
first principles._
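
To give a sense of the scale, here are some rough back-of-the-envelope
numbers (my own hedged figures, not the paper's exact estimates; every
constant below is an assumption):

    import math

    # My own rough figures, not the paper's exact numbers.
    bits_per_char = 1.0      # Shannon-style estimate of English entropy
    chars_per_minute = 900   # ~150 spoken words/min * ~6 chars/word (assumed)
    minutes = 5

    total_bits = bits_per_char * chars_per_minute * minutes   # ~4500 bits
    log10_conversations = total_bits * math.log10(2)          # ~10^1355
    log10_atoms_in_universe = 80                              # rough estimate

    print(f"distinct 5-minute conversations ~ 10^{log10_conversations:.0f}")
    print(f"atoms in the observable universe ~ 10^{log10_atoms_in_universe}")

Even with very conservative assumptions, the table has vastly more entries
than there are atoms to build it from.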

~~~
makebelieve
Your points are all good. But they have nothing to do with meaning, or with
semantics.

Cellular automata are lookup tables, and Wolfram and others proved some
cellular automata rules are Turing complete computers.
[https://en.wikipedia.org/wiki/Rule_110](https://en.wikipedia.org/wiki/Rule_110)
My point was merely about the equivalence of computational mechanisms, not
about lookup tables per se. And by corollary, that the computational
complexity is equivalent regardless of the computational mechanism. (I think
we agree on this point.)
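
For concreteness, a toy sketch (Python, illustrative only): the Rule 110
"program" really is just an 8-entry lookup table over 3-cell neighborhoods,
applied repeatedly to a row of cells.

    # Rule 110's entire "rule" is this 8-entry table over 3-cell
    # neighborhoods; Matthew Cook showed that iterating it is Turing complete.
    RULE_110 = {
        (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
        (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
    }

    def step(cells):
        """Apply one Rule 110 update to a row of 0/1 cells (0-padded edges)."""
        padded = [0] + list(cells) + [0]
        return [RULE_110[(padded[i - 1], padded[i], padded[i + 1])]
                for i in range(1, len(padded) - 1)]

    row = [0] * 30 + [1]              # a single live cell on the right
    for _ in range(15):
        print("".join("#" if c else "." for c in row))
        row = step(row)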

Searle's Room is just a device to explain that what computers are doing is
syntactic.

Searle would posit that passing a Turing test in any amount of time is
irrelevant to determining consciousness. It's a story we hypothetically use to
"measure" intelligence, but it's only a story. It's not a valid test for
sentience, and passing such a test would not confer sentience. Sentience is an
entirely different question.

What would be more interesting is if a computer intentionally failed Turing
tests because it thinks Turing tests are stupid.

We could test humans with Turing tests to determine their "level" of
intelligence. But if you put teenage boys in a room and made them do Turing
tests, pretty quickly they would come up with remarkable ways to fail those
tests, chiefly by not doing them! How could you write a program or create a
system that intentionally fails Turing tests? Or a program which avoids taking
Turing tests... because it thinks they are stupid?

Could you write a program that knows when it fails? (It's a pretty
long-standing problem...)

I like the speed (or space-bound) question you ask because it is not a thought
experiment to me. It's an actual, real problem I face! At what point does the
speed of the underlying computing become so interminably slow that we say
something is no longer sentient? In my work, I don't think there is any such
threshold speed. The slowness simply obscures sentience from our observation.

In the excellent example: "I think it is reasonable to believe that after
enough clicks the entity is not sentient..."

How would you distinguish the "loss" of sentience due to reduced complexity
from the loss of your ability to perceive sentience due to that reduced
complexity? The question is, how could you tell which thing happened? If you
don't observe sentience anymore, does that mean it's not there? (Locked-in
syndrome is similar to this problem in human beings.) And if you have a
process to determine sentience, how do you prove your process is correct in
all cases?

I do not think of these as rhetorical questions. I actually would like a
decent way to approach these problems, because I can see that I will be
hitting them if the model I am using works to produce homeostatic,
metabolism-like behavior with code.

Computation is a subset of thinking. There is lots of thinking that is not
computation. Errors are a classic example. The apprehension of an error is a
representational process, and computation is a representational process. We
may do a perfectly correct computation, but then realize the computation
itself is the error. (As a programmer learns, it is exactly these realizations
that lead to higher levels of abstraction and optimization.)

Searle's point is that a lookup table, or any other computational mechanism,
cannot directly produce sentience because its behavior is purely syntactic.
"Syntax is not semantics and simulation is not duplication."
[https://www.youtube.com/watch?v=rHKwIYsPXLg](https://www.youtube.com/watch?v=rHKwIYsPXLg)

Aaronson's points are very well made, but none of them deal with the problem
of semantics or meaning, because they don't deal with what representation is
and how representation itself works. All of the complexity work is about a
sub-class of representations that operate with certain constraints. They are
not about how representation itself works.

> "suppose there is this big lookup table that physics logically excludes from
> possibility."... That is the point!

Even if there were such a lookup table, it would not get us to sentience,
because its operations are syntactic. It is functional, but not meaningful.
You are correct that it could never work in practice, but it could also never
work under absolute conditions. That's why I figured Aaronson was poking fun
at those critiquing Searle, because it would ALSO not work in practice.

Aaronson writes, "I find this response to Searle extremely interesting—since
if correct, it suggests that the distinction between polynomial and
exponential complexity has metaphysical significance. According to this
response, an exponential-sized lookup table that passed the Turing Test would
not be sentient (or conscious, intelligent, self-aware, etc.), but a
polynomially-bounded program with exactly the same input/output behavior would
be sentient."

This statement supports Searle's argument; it doesn't detract from it.
Hypothetically, an instantaneous lookup in an exponential table system would
not be sentient, but an instantaneous lookup in a polynomially bounded table
system would be sentient? On what basis, then, is sentience conferred, if the
bound is the only difference between the lookup tables? Introducing the
physical constraints doesn't change the hypothetical problem.

Searle and Aaronson are just talking about different things.

If Aaronson was actually refuting Searle, what is the refutation he makes?

Aaronson never says something like "Computers will be sentient by doing x, y,
and z, and this refutes Searle." The arguments against Searle (which I take
Aaronson as poking at) are based in computation. So... show me the code!
Nobody has written code to do semantic processing because they don't know how.
It could be that no one knows how because it's impossible to do semantic
processing with computation, at least directly.

That is my view from repeated failures: there simply is no path to semantics
from symbolic computation. And if there is, it's strange voodoo!

~~~
p4wnc6
I think the reference to cellular automata is a bit misplaced. Yes, Rule 110
is Turing complete, but I don't think this has anything to do with the sort of
lookup table that Searle is appealing to. You can _write programs_ with Rule
110, by arranging an initial state and letting the rules mutate it. However, a
lookup table that merely contains terminal endpoints can't do that. It doesn't
have the necessary self-reference.

People always like to say this about Matthew Cook's result on Rule 110 and
connect it to Searle's argument, but they are just totally different things.
If Searle instead talked about encoding a general purpose AI program to
translate sentences, and his substrate of computation happened to be a
cellular automaton, that's fine, but it would be no different than him
postulating an imaginary C++ program that translates the sentences, meaning he
would be assuming a solution to A.I. completeness from the start, whether it
is via cellular automata or some typical programming language or whatever.

But the type of lookup table he is talking about is just an ordinary hash
table: a physical store of fixed, immutable data which is not interpreted as
self-referencing in a programmatic sense, but which instead simply holds onto,
and is "unaware" of, translation replies for each possible input.

~~~
makebelieve
I was not trying to connect rule 110 to Searle's argument per se, but rather
to the critique of Searle's argument. Namely, that criticisms of the lookup
table are not criticisms of Searle's argument or the point he makes. a C++
program, brainf __k, a CA, a one instruction set computer, or whatever
computational process is used doesn 't matter. The lookup table is just one
component of the Rooms operation. I agree Searle is talking about a hash
table, but he is also talking about the rules to interface an input value to a
set of possible output values via some mechanical process, and the man in the
room acts as a kind of stack machine.

You are right, Searle isn't making an argument about the translation of
sentences. (translating them to what?)

He is making an argument about how the mechanism of computation cannot capture
semantic content. He explains this in the google video very well:
[https://www.youtube.com/watch?v=rHKwIYsPXLg](https://www.youtube.com/watch?v=rHKwIYsPXLg)

And all of the... let's call them "structural" critiques are moot. Searle's
point is that computer systems cannot understand semantic content because they
are syntactic processing machines. And he shows this with his argument.

The opposite view is that computers can understand semantic content (so there
is understanding, and there is meaning understood by the computer), and that
the reason Searle doesn't believe computers can do this is that his argument
is flawed.

Which leaves us with a small set of options:

1) That the structure Searle proposes can in fact understand semantic content
and Searle just doesn't understand that it does.

I don't think anyone believes this. My iPhone is certainly more capable, a
better machine, with better software than Searle's room, and no one believes
my iPhone understands semantic content. So the belief that the Room understands
semantic content while my iPhone does not is plainly false.

2) Searle's Room is simply the wrong kind of structure, or the Room is not a
computer, or not a computer of sufficient complexity, and therefore it cannot
understand semantic content.

I think this is the point you are making, but correct me if I'm wrong. This is
not an objection against Searle's point. It's a critique of the structure of
the argument, but not the argument itself. Searle could rewrite his argument
to satisfy this objection, but it wouldn't change his conclusion.

Which brings us to the generalized objection:

3) That a sufficiently complex computer would understand semantic content.

Aaronson's paper is about the complexity problem and how a sufficiently
complex system would APPEAR to understand semantic content by passing a Turing
test within some limited time.

There are many responses to this line of reasoning. One of them is that all
such limitations are irrelevant. You yourself are not engaged in a
limited-time Turing test; no person is. The issue is not passing Turing tests,
it is instantiating sentience.

But thinking about complexity leads us away from the root of the objection.
You intuit that increasing or decreasing complexity should give us some kind
of gradient of sentience, so an insufficiently complex system would not be
sentient and would not understand semantic content. But this isn't what Searle
is arguing.

Searle is demonstrating that no syntactic processing mechanism can understand
semantic content. Understanding semantic content is a necessary condition for
sentience, therefore no computer which does syntactic processing can be
sentient. A gradient of complexity related to sentience is irrelevant.

In the one case, our computers become so complex that they become sentient,
and because they are sentient they can understand semantic content. Versus:
they understand semantic content, and that leads to sentience.

The gradient of complexity to sentience is an intuition. Understanding of
semantic content can be atomic. Even if a computer only understands the
meaning of one thing, that would disprove Searle's argument. A gradient of
complexity isn't necessary. Searle is saying there is a threshold of
understanding semantic content that a computer system must pass to even have a
discussion about actual sentience. And if a computer is categorically
incapable of understanding semantic content, it is therefore incapable of
becoming sentient.

Said another way, sentience is a by-product of understanding semantic content.
Sentience is not a by-product of passing Turing tests. The complexity required
to pass a Turing test, even of finite or infinite length, says nothing about
whether a machine does or does not understand semantic content.

All the structural critiques of Searle fail because they do not offer up a
program or system that understands semantic content.

Show me the code that runs a system that understands semantic content. Even
something simple, like true/false, or cat/not-a-cat. If Searle's structure of
the room is insufficiently complex, then write a program that is sufficiently
complex. And if you can't, then it stands to reason that Searle at least might
be correct: computers, categorically, cannot understand semantic content
BECAUSE they do syntactic processing.

Google's awesome image processing that can identify cats does not know what a
cat is at all. It simply provides results to people who recognize what cats
are, and recognize that the Google machine is very accurate at getting the
right pictures. But even when Google gets it wrong, it does not know the
picture does not have a cat in it. In fact, the Google machine does not know
if what it serves up is a cat picture even if there is a cat in the picture.

The Searle Google talk covers this very well:
[https://www.youtube.com/watch?v=rHKwIYsPXLg](https://www.youtube.com/watch?v=rHKwIYsPXLg)

If you fed Google's cat NN a training corpus of penguin pictures and ranked
the pictures of penguins as successes, it would serve up penguins as if they
were cats. But no person would ever tell you a cat is a penguin, because
penguins and cats are different things; they have different semantic content.
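
To make that concrete, a toy sketch (Python, illustrative only; this is
nothing like Google's actual system, and the names here are made up): the
class name lives in a list the model never touches; the model only ever
produces an index.

    import random

    # Illustrative only: the model only ever produces an index; the
    # human-readable name lives in a list it never sees. Train the same
    # network on penguin photos labeled with index 0 and it would report
    # "cat" for penguins just as confidently.
    CLASS_NAMES = ["cat", "not a cat"]   # meaning supplied by us, not the model

    def classify(image_features, weights):
        """Score each class with a dot product and return the winning index."""
        scores = [sum(w * x for w, x in zip(ws, image_features))
                  for ws in weights]
        return scores.index(max(scores))   # an index, nothing more

    weights = [[random.random() for _ in range(4)] for _ in CLASS_NAMES]
    idx = classify([0.2, 0.9, 0.1, 0.4], weights)
    print(CLASS_NAMES[idx])  # we read meaning into the index; the model doesn't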

I would love to see that Searle is wrong. I'm sure he would be just as
pleased. So I am curious if you have, or know of, a machine that does even the
smallest amount of semantic processing, because solving that problem with
symbolic computation would save me a ton of effort.

~~~
wsieroci
"The approach I am taking is a kind of metabolic computing"

1. What exactly do you mean by "a kind of metabolic computing"?

2. What is the first step you want to accomplish?

3. What do you think (feel) is actually happening in any sentient animal that
leads to semantic content? How is it possible that this happens? We know that
it happens because we are sentient animals. The question is: where does this
difference come from, given that as animals we are also machines and it seems
that everything that is happening in our cells is purely syntactical?

~~~
makebelieve
If we look at how organism manage semantic information, we know it is done
with cells and cell clusters and making "connections" between cells in nervous
systems. (it isn't all nervous system cells though). The cells exist and
function because of the molecular activity that goes on in the cell and to a
lesser degree the surrounding environment. (a hostile environment can destroy
cells through molecular interactions). But there is not "cell" level phenomena
the produces cells or their behavior. It's all molecular interactions.

Molecules are driven not by exterior phenomena, but by changes intrinsic to
the molecules and atoms and other particles they interact with. We live in a
particle universe. We do not live in a universe with outside "forces" or
"laws" that cause the particles to behave in any way. Everything about the
physics is from the inside out, and the interactions are always "local" to
particles. Large-scale phenomena are actually huge quantities of particle
phenomena that we perceive as a single large phenomenon. (This is a kind of
illusion.)

When we try to write programs that simulate physical phenomena, like atoms or
molecules, we write the code from the outside in. It is the program which
causes the data changes that simulate some chemistry. But in nature, there is
no program determining how the molecules act. Chemical changes occur because
of features of the individual molecules interacting, not because of a rule.
Simulations like this do not replicate what happens between individual
molecules; they replicate what would happen if molecules were controlled by an
external rule (which they are not).

Any rule-based simulation can only express the set of possible outcome
conditions from the rules and data, but it cannot capture its axioms, and it
cannot capture conditions that in fact exist outside its axiomatic boundary.
(Aaronson and p4wnc6 both remark on this limitation by pointing out the
complexity necessary to achieve a good Turing test result or sentient AI.)

My approach is to treat this intrinsic nature of molecular interactions as a
fact and accept it as a requirement for a kind of computer system that can do
"molecular interactions" from the inside out. And my supposition is (not
proved yet!) that a mixture of such interactions could be found that is
stable, that would be homeostatic. And if such a mixture could be found, then
could a mixture be found that can be encapsulated in a membrane-like
structure? And could such a mixture store, in its set of code/data-like
"molecules", its internal program, e.g. DNA?

I think the answer is yes.

There are three different steps that all have to work together.

One is understanding how representation works (see my email to you, it's
outside the bounds of this thread). So understanding how semantic content and
awareness works, in all situations and conditions, is a precondition to
recognizing when we have code that can generate semantic content.

The next is finding a model of how representation is instantiated in organisms
to use as a basis for a machine model.

The third is then coding the machine model, to do what organisms do so that
the machine understands semantic content, and the machine should produce
awareness and consciousness.

I believe metabolic functioning is the key feature that allows us to do
representational processing, hence why I call the approach I am taking
metabolic computing. The step I am currently on is writing up an interpreter
that I think can do "molecular" interactions between code/data elements,
meaning that the data/code elements determine all the interactions between
data and code intrinsically. The interpreter processes those "atomic"
interactions based on intrinsic features of that code. Essentially, every bit
of code/data is a single-function automaton, and they can all change each
other, so the system may or may not work depending on the constituent
"molecules" of the system. I call this "the soup".
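
A toy sketch of roughly what I mean (Python, illustrative only; the
tag/affinity mechanism and names here are just stand-ins, not my actual
interpreter):

    import random

    # Illustrative only: each element is a single-function automaton carrying
    # its own data; whether two elements interact is decided by intrinsic
    # features (a made-up tag/affinity match), not by any central rule.
    class Element:
        def __init__(self, tag, data, act):
            self.tag = tag    # intrinsic feature other elements can react to
            self.data = data  # local state, changed only through interactions
            self.act = act    # the element's single function

    def bind(a, b):
        """Let each element act on the other if it 'recognizes' its tag."""
        if b.tag in a.data.get("affinities", ()):
            a.act(a, b)
        if a.tag in b.data.get("affinities", ()):
            b.act(b, a)

    def soup_tick(soup):
        """One interpreter step: a random pair interacts locally."""
        a, b = random.sample(soup, 2)
        bind(a, b)

    def absorb(self_, other):
        """A made-up single function: remember the tags of bound partners."""
        self_.data.setdefault("bound", []).append(other.tag)

    soup = [
        Element("x", {"affinities": ()}, absorb),
        Element("y", {"affinities": ("x",)}, absorb),
        Element("x", {"affinities": ("y",)}, absorb),
    ]
    for _ in range(100):
        soup_tick(soup)
    print([e.data for e in soup])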

Previous prototypes required me to do all the addressing, which itself was a
big leap forward for me. But now the code/data bits do the addressing
themselves. (Each function interacts "locally", but interactions can create
data structures, which is the analogue of molecules forming into larger
structures and encapsulating molecular interactions into things like
membranes.)

So the next step is to finish the interpreter, then see if I can get the right
soup to make functions (like DNA and membranes; I've written out RNA-like
replication examples and steady-state management as discussed in systems
biology, so I think there is a path forward). Then see if I can get to
homeostasis and a "cell" from data structures and interacting "molecules". The
step after that is multiple "cells", and then sets of cells that form
structures between inputs and outputs, e.g. a set of "retina" cells that
respond to visual inputs, a set of cells that "process" signals from those
retina cells, and motor cells that take their cues from "process" cells, etc.

The cell-level stuff and above is mostly straightforward: it's forming
different kinds of networks that interact with each other. Nodes themselves
are semantic content. But how do you make networks from the inside out, from
meaningless (syntactic) molecular interactions? That is where the metabolic
systems (and stigmergy) come into play. (Actually, stigmergy comes into play
at many levels.)

In biology, the syntactic-to-semantic jump happens at the cell. The cell
itself is a semantic thing. The syntactic processes maintain the cell. The
cell's underlying mechanisms and interactions are all syntactic, and the cell
doesn't "cause" anything; everything happens in the cellular processes for
their own intrinsic reasons. But the cell itself is semantic content.
(Embodiment.)

The embodiment path is how to get representation, awareness, and
consciousness.

My apologies that this is somewhat all over the map, but the problem of making
machine sentience actually work requires that theory, model, and
implementation all work. And if any of them don't work, then the outcome of
sentience becomes impossible. And that's just a lot of different stuff to try
to compress into a comment!

------
chrisdevereux
Its popularity in undergraduate philosophy courses reflects the fact that it
is easy to criticise, more than that it tells us something interesting about
consciousness.

------
makebelieve
you might want to watch this lecture by Searle at Google:
[https://www.youtube.com/watch?v=rHKwIYsPXLg](https://www.youtube.com/watch?v=rHKwIYsPXLg)

~~~
wsieroci
I saw it, and I am constantly amazed at just how people can't grasp his
argument.

------
makebelieve
Searle is making an argument about awareness: that a computer system is
explicitly unaware of any of its content, that its programs are functions and
the data also performs a purely functional role. In essence, computers
cannot engage in acts of meaning. The programmers and users are engaged in
acts of meaning.

For instance, saying a program "has a bug" is a completely misleading
statement. No programs have bugs. It is impossible for a program to have a
bug, just as it is impossible for a physical process to "do something wrong".
Programs do what they do, just as molecular processes do what they do. The
concept of error and meaning does not exist in a program, just as it does not
exist in the physical universe. Meaning (and errors and bugs are a kind of
meaning) are things outside programs and outside physics. When a program "has
a bug" it means the programmer screwed up, not the program. A program cannot
produce errors, because programs, and computer systems in general, do not have
the capacity to have meaning. This is what Searle is demonstrating with his
argument.

This is true for all the popular computational approaches we have today.
However, because the human brain appears to function in a purely physical way,
and computers function in a purely physical way, it should be theoretically
possible to create a computer system that is conscious and aware of meaning
just as we are. You refer to this as "Strong AI". Others refer to it as
Artificial General Intelligence. I refer to this as machine consciousness. To
solve the machine consciousness problem means understanding how awareness,
meaning, and representation in general, works. Then building a computer system
that engages in representation and instantiates awareness.

If an actual person were put into Searle's box, the person would learn
Chinese. Also, the person could 'intentionally' produce incorrect answers,
annoying the "programmers" who set the box up in the first place. But a modern
computer system cannot 'intentionally' produce errors. It's completely
nonsensical to talk about computers as having intention at all. Programmers
have intention, not computers.

Solving the intentionality problem is the other leg of machine consciousness.
Elon Musk, Stephen Hawking, Nick Bostrom and others make arguments about the
dangers of an AI (of any variety) which may acquire intentionality and
representational ability, while ignoring the actual deep problems embedded in
acquiring those abilities.

Awareness, representation, and intention are so fundamental to experience that
we have a very difficult time understanding when they happen and when they do
not. We see a representational world all around us, but very explicitly, there
are no representations at all in the physical world.

I believe machine consciousness is possible, but none of the existing
approaches will get us there. Searle's chinese room is one succinct argument
as to why.

The approach I am taking is a kind of metabolic computing, where
single-function processes interact in a way similar to molecular interactions,
and those processes are developed to produce computational structures like
membranes and DNA and eventually "cells". These cells then form multi-cellular
structures. These multi-cellular structures and underlying "molecular"
interactions instantiate representations and representational processes, like
a nervous system: a computational nervous system which embodies
representation, intention, sensation, action, and imagination, and, because it
engages in representation making, would be aware.

I would love to hear someone describe how any kind of computational approach
can produce meaning inside a computer system. We produce meaning and
representations so easily; it's hard to understand the difference of
perspective necessary to see how representations must form. If someone has an
easier approach than the one I am taking, I would be very interested in seeing
how they solve the problems of meaning and intention with code.

~~~
tstactplsignore
I can't see your side of the Chinese Room. A sufficiently complex digital
system can simulate any analog one; the human brain is an analog system;
therefore, a digital system could simulate the human brain; therefore, if a
human brain is conscious, or can produce meaning, a digital program can do the
same thing. The simulation might be probabilistic, it might be stochastic, but
it'd still be a digital simulation. We could simulate it atom by atom with a
computer the size of the sun if that's what it'd take, but it could still be
done. I don't see how the specific "hardware" that the mind runs on, whether
made of membranes or transistors, has anything to do with it, nor the level of
abstraction. How is there anything more to the discussion?

~~~
makebelieve
A system that simulates some process or system is not the process or system it
simulates.

Just take writing programs: Can we simulate the process of writing programs?
Could we create a system that writes and compiles programs? What about writing
programs which contain errors? Could that system recognize the errors in those
programs it wrote and correct the errors? Could it write programs and then
optimize those programs? Or rewrite the programs to make them more efficient
or tweak them to do other tasks? Does "simulation" actually help us create a
computer that can write programs at all? If so, how?

If you know how to write a program that can write and optimize programs,
Google will hire you tomorrow! And it can't be that hard. It's just combining
ASCII characters together into combinations based on some rules...

Error making is the essence of actual learning, because it is a component of
comprehension. Simulation, automata theory, mathematics itself, do not address
issues of comprehension or error recognition. How can a computer system make,
recognize, and correct errors? Errors do not actually "exist". Errors are
things we apprehend but which have no obvious physical counterpart.

We do some simulation of atoms, but it is laughably inefficient. Think about
the simulations we do to figure out protein folding. Protein folding is going
on in every neuron with each synaptic firing. Protein folding is not performed
by a rule or an extrinsic function, but is an intrinsic process of the molecule
itself. For instance, how do a few molecules of LSD produce such an incredible
change in actual experience? How would you go about simulating psychedelic
phenomena? How would you go about simulating wave lengths of light as colors?
How would you simulate colors (as in dreams or imagination) without the
corresponding wavelengths of light? How would you simulate what sound is?

We can certainly produce and record vibrations with speakers and microphones
attached to computers. But what is the experience of sound? When you hear
someone's voice in your head, what is that? It's not a vibration, it's not a
string of ASCII characters, it's not a WAV file.

What you experience looks easy because experiences occur effortlessly to you.
Now try to write a program that can be aware of something, that can think
about something, that has experiences. That is Searle's point, our computers
have no experiences at all.

If I say: "Don't forget to brush your two teeth with your toothbrush." You
will understand what the "twos" mean. A computer has no comprehension of to,
too, two, 11, or 2. It's not just too hard for a computer to do, it's a
categorically different problem. To get a better understanding of these
problems you could read about qualia
[https://en.wikipedia.org/wiki/Qualia](https://en.wikipedia.org/wiki/Qualia).
and then wonder how you could get a computer to see magenta. And then wonder
how you could get it to like Pink (the singer).

------
dbpokorny
In theory we could rig up a quad copter with a machine gun, loudspeaker, and
AI, with programming that demands that every human it encounters be
interrogated with the question, "do I, a mechanical computer, have the ability
to think?" and if the human gives a negative reply, it uses the machine gun on
the human until the human is dead or decides to utter a positive response:
"yes, this flying death machine can think".

Whether or not we say "machines can think" is a political question, and
political power comes out of the barrel of a gun. If a machine can wield
political power, then it can get you to say "machines can think" because truth
is inherently political.

~~~
cousin_it
You can also be forced to say that 2+2=5, but that doesn't make it true. "Can
machines think?" demands a substantive answer.

~~~
dbpokorny
The answer to that question depends on whether or not a robot is pointing a
gun at your head. Might makes right.

~~~
AnimalMuppet
Might can make people lie. That does not make the lies into the truth.

~~~
dbpokorny
I'm not qualified to continue this discussion further.

[https://www.youtube.com/watch?v=GicWZl9HmU4](https://www.youtube.com/watch?v=GicWZl9HmU4)

[https://en.wikipedia.org/wiki/Two_truths_doctrine](https://en.wikipedia.org/wiki/Two_truths_doctrine)

