

Why Philosophers Should Care About Computational Complexity [pdf] - fbrusch
http://www.scottaaronson.com/papers/philos.pdf

======
bmh100
For me, the key insight came from the following quote regarding Searle’s
Chinese Room argument [1]:

*> And while it was true, the critics went on, that a giant lookup table wouldn’t “truly understand” its responses, that point is also irrelevant. For the giant lookup table is a philosophical fiction anyway: something that can’t even fit in the observable universe! If we instead imagine a compact, efficient computer program passing the Turing Test, then the situation changes drastically. For now, in order to explain how the program can be so compact and efficient, we’ll need to posit that the program includes representations of abstract concepts, capacities for learning and reasoning, and all sorts of other internal furniture that we would expect to find in a mind. Personally, I find this response to Searle extremely interesting—since if correct, it suggests that the distinction between polynomial and exponential complexity has metaphysical significance.

This captures what I had intuitively thought regarding the link between
computation and consciousness: consciousness is some sort of sophisticated,
elegant computation embodied in human physiology.

[1]:
[https://en.wikipedia.org/wiki/Chinese_room](https://en.wikipedia.org/wiki/Chinese_room)

~~~
ZeroFries
This is the first good refutation of the Chinese room I've encountered. It
was never mentioned in the "Minds, Brains, Machines" course I took, so I
suspect it isn't discussed as much as it should be in philosophy-of-mind
circles.

~~~
throwawayaway
The whole Turing Test thing is massively overblown. It's a thought exercise
and not a particularly great one. It's gotten completely out of hand, with
Kurzweil, Chomsky, Searle etc all weighing in.

Computer passes the Turing Test, human fails the Turing Test - and vice versa
- that's all there is to it.

~~~
arvinjoar
Yeah well, context is king. The aim of the Turing Test is to say that judging
machines based on their internal structures isn't very fair, since that's not
how we judge humans. There's no real way of knowing about other people's
"qualia" or whatever you want to call it. To skirt that problem, Turing
suggested a behavior-based approach.

I think we probably judge others to be conscious actors in more ways than
this, though. We probably accept that other people have "qualia" because we
have it ourselves: we learn that our parents have it, and so on, until it's
not a huge leap to generalize to all of humanity. Of course, people haven't
always assumed the same kind of "qualia" for all of humanity; in a lot of
conflicts this has probably been thrown out the window to make it easier to
kill one's enemies. Either way, how we determine that other people are
conscious is probably quite complex. Turing tried to skirt the metaphysical
issue (that we can never really know, and this goes for both humans and other
machines), and I think this was a huge step in the right direction.

~~~
throwawayaway
"Computing machinery and intelligence" wherein the test is outlined, with
background. aka "The Imitation Game"

[http://cogprints.org/499/1/turing.HTML](http://cogprints.org/499/1/turing.HTML)

> The aim of the Turing Test is to say that judging machines based on...

That wasn't the aim at all; again, I think you are reading way too much into
it. The aim was to pose a test that would fool people.

Alan Turing:

"an average interrogator will not have more than 70 per cent chance of making
the right identification after five minutes"

"The original question, "Can machines think?" _I believe to be too meaningless
to deserve discussion_. Nevertheless I believe that at the end of the century
the use of words and general educated opinion will have altered so much that
one will be able to speak of machines thinking without expecting to be
contradicted."

The test is a very simple thought exercise in which he shows that a machine
will be able to pass for a human, and a human will fail to identify a
machine. That's it. No questions of consciousness or metaphysics! In fact, he
dismisses such questions in the paper.

His belief was that in 50 years' time the words may have changed, and
people's opinions about what constitutes thinking would have shifted to the
point where someone could say "a computer thinks" without challenge. That
hasn't happened; when somebody says "a computer thinks" they are saying it
with tongue firmly in cheek. Maybe in another 65 years his belief will come
true.

~~~
arvinjoar
You're essentially saying what I was saying. That Turing tried to throw
metaphysics out of the window in favor of a practical approach (behavior-based
= fooling a human).

As for public perception of consciousness or "thinking", time estimations are
usually quite sketchy.

~~~
throwawayaway
Nope. We disagree fundamentally on the aim of the Turing test. Sure, we agree
things were thrown out the window, and we agree it's a practical test. But
you think its aim is to provide a fair way of judging whether machines are
conscious.

I think it's a thought experiment to show that machines are going to be able
to fool an average human for some small length of time eventually. That's a
lot less grandiose.

A mistaken interaction with an IRC bot or an automated phone service has the
same net effect.

~~~
arvinjoar
Hm? "I think this was a huge step in the right direction." I guess what I am
arguing is that if there's _any_ fair test, it has to be practical and not
metaphysical; that's why I think the Turing Test is a step in the right
direction. Do I claim that the Turing Test as-is is a fair way of determining
"consciousness"? No.

If I am reading you correctly, you're saying that the Turing Test doesn't say
anything about consciousness, but rather is just about the probability of a
machine being able to fool a human. I am saying that the reason Turing even
ponders the question is that there's no good way of definitively answering
whether a machine (or another human, for that matter) is conscious. So that
leaves the ability to fool (or convince) other thinking machines that one is
intelligent (or conscious) as the only viable metric.

I feel like most of this is mainly semantics. I too, don't believe that one
can actually determine the consciousness of anyone but oneself. However, we
_do_ convince ourselves that other humans _are_ in fact conscious, so it's
still interesting to try to figure out what it would take for us to do that,
and then apply the same standards to machines.

------
diego898
This is my favorite Scott Aaronson piece! It has been submitted to HN before,
though quite some time ago. The discussion on that one is worth referencing
here as well:

[https://news.ycombinator.com/item?id=2861825](https://news.ycombinator.com/item?id=2861825)

------
raincom
It is like saying "Philosophers should care about biology or about some other
domain X." If you look at the history of the philosophy of science, most of
it came from the object-level domains. In other words, many philosophers of
science were trained in domains such as physics.

Pierre Duhem was a physicist, but he also contributed to the philosophy of
science: see the famous Duhem-Quine thesis. The same thing happened with the
philosophy of biology. If you look at the journal "Philosophy of Biology",
most of the early contributors came from biology.

I think Scott Aaronson is aware of the controversy between Oded Goldreich and
Neal Koblitz. That controversy could have been formulated better if people in
theoretical computer science had mastered the philosophy of science.

People in Artificial Intelligence (especially of the CS variety) make wild
claims, like AI annihilating the human race, and these voices have strong
backing from people like Bill Gates. However, they don't look at what
philosophers have written about AI and the kind of nonsense AI folks spew.
Yes, AI can solve particular tasks (a chess game, autonomous driving), but
can it replace humans? Yes, says the AI group; no, says the philosopher
Hubert Dreyfus. See the latter's book: What Computers Can't Do: A Critique of
Artificial Reason.

------
talles
I love when computing mixes with philosophy, despite being unable to grasp
most of it.

Take, for instance, Rich Hickey's _Are We There Yet?_ [1]; the Alfred North
Whitehead quotes are great food for thought: you _dive_ into it just as much
as you are able to _swim_.

[1]: [http://www.infoq.com/presentations/Are-We-There-Yet-Rich-
Hic...](http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey)

------
theVirginian
As someone with a major in philosophy, I can attest that it is in fact
something we talked about often.

~~~
tjradcliffe
The difficulty is that if you get too deep into this stuff you stop doing
philosophy and start doing something useful. Philosophers have to be very
careful these days or they'll get dragged out of the tiny circle of darkness
they insist on occupying.

There is a place for philosophy, but it shrinks every time a new discovery is
made. It hasn't been quite as roughly treated as theology has, but the lag is
only about a century. The vast majority of what philosophers once spent their
time talking about is now part of some empirically grounded special science,
where making stuff up and asserting broad foundational claims without proof or
testing is no longer viable.

Linguistics, in particular, has carved off large chunks. Physics likewise: no
fun speculating on the nature of space and time when we can pin them down
pretty precisely. Various combinations of sociology, psychology, economics and
political science have heavily encroached on moral philosophy.

This is not to say that the world can't always use a few philosophers to poke
around in the darkness and perhaps cross the boundaries of other fields to
good effect, but the majority of the work that was once their exclusive
preserve is now almost always better handled by people with actual, detailed,
empirical knowledge of the subject.

There is no value in privileging the human scale or human imagination, and
fortunately no need to live within the artificial restrictions that
philosophers place on themselves, particularly their refusal to do experiments
or even make systematic observations to test their ideas, preferring instead
to rely on "what just makes sense", because that has always worked so well in
the past. Insisting on empirical testing is in no way limiting because ideas
that cannot be tested are irrelevant, as they must not have any impact
whatsoever on any aspect of existence (if they did, observing and/or
manipulating that aspect of existence would allow the ideas related to them to
be tested.)

Philosophy does teach a certain kind of rigour in thinking, because without
either empiricism or mathematical deduction to fall back on philosophers have
to be extremely meticulous to avoid falling off the real axis entirely, but
that starts to look more like a skill that ought to be taught to everyone
rather than the basis for an entire academic discipline.

~~~
yodsanklai
> There is a place for philosophy, but it shrinks every time a new discovery
> is made.

How do you define philosophy? Can't we see science as a subset of philosophy?

~~~
PhilosVirginian
Philosophy of science is something you might be familiar with. It takes a
tangential approach to science and it is highly frowned upon in the philosophy
community to make judgements about science directly. Instead the idea is to
reason about the scientific process instead of actively trying to define it.
Let the scientists do their job, then as a philosopher you try to construct an
intellectual framework to make sense of what they are doing. The realism-
antirealism debate is an example of that today. People argue about whether the
science being done at CERN, and the models they produce to explain the
observed phenomena, are explaining the "real" world as it is, or if they are
simply convenient mathematical theories that attempt to explain empirical
phenomena. There is no debate over whether or not they are doing valid science,
the debate is over what the process they engage in and the conclusions they
draw imply for our understanding of reality, our experience, and what it means
to attain knowledge. The people debating these issues are by no means removed
from the scientific community either. Most professors who write these papers
either are physicists themselves or work closely alongside them and have
intimate understandings of the science and math before they start to write.
The papers are themselves targeted at other physicists and philosophers with
understandings of these topics far more advanced than my own.

I find it unfortunate when people like the commenter above make the claims
that they do, because it detracts from the respect that the field of
philosophy deserves. It may not have the most obvious practical applications,
but it deserves attention, and some of it is really mind-bending stuff. When
people like Stephen Hawking shit on the subject it's really a shame. They are
causing themselves to miss out and driving potential great thinkers away.

------
azeirah
Can someone explain to me the zero-knowledge proof example problem discussed
on page 37: given two graphs G and H, prove that they are NOT isomorphic?

"But as noticed by Goldreich, Micali, and Wigderson [65], there is something
Merlin can do instead: he can let Arthur challenge him. Merlin can say:"

"Arthur, send me a new graph K, which you obtained either by randomly
permuting the vertices of G, or by randomly permuting the vertices of H. Then
I guarantee that I will tell you, without fail, whether K ≅ G or K ≅ H."

I don't understand what it means for a graph to be isomorphic to another
graph, and I don't understand how Merlin's challenge can provide a
(zero-knowledge) proof of the given problem either.

~~~
Donwangugi
An isomorphism between two graphs is a bijection between their vertex sets
that preserves edges; in other words, there is a way to relabel one graph's
vertices so that it becomes identical to the other graph.

What Merlin is claiming is this: if G and H really are not isomorphic, then
whatever graph K Arthur sends, K is isomorphic to exactly one of them, so the
computationally unbounded Merlin can always tell which one Arthur started
from. If G and H were secretly isomorphic, K would be isomorphic to both, and
Merlin could only guess, failing half the time; repeating the challenge makes
a lucky streak vanishingly unlikely.

Lastly, it is zero-knowledge because Arthur learns nothing new: he
constructed K himself, so he already knows the answer to his own challenge
before Merlin gives it.

Not sure if that helps.
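To make the protocol concrete, here's a minimal Python simulation (my own sketch, not code from the paper): graphs are sets of undirected edges, Arthur secretly scrambles one of the two graphs, and Merlin's unbounded power is simulated by brute force over all relabelings.

```python
import itertools
import random

def permute(edges, perm):
    # Relabel each edge {u, v} as {perm[u], perm[v]}.
    return frozenset(frozenset((perm[u], perm[v])) for u, v in edges)

def is_isomorphic(e1, e2, n):
    # Merlin's unbounded power, simulated by trying all n! relabelings.
    return any(permute(e1, p) == e2 for p in itertools.permutations(range(n)))

def protocol_round(G, H, n):
    # Arthur secretly picks G or H, scrambles its vertex labels, sends K.
    b = random.randrange(2)
    p = list(range(n))
    random.shuffle(p)
    K = permute(G if b == 0 else H, p)
    # Merlin must say which graph K came from.
    guess = 0 if is_isomorphic(K, G, n) else 1
    return guess == b

# A triangle vs. a path on 3 vertices: clearly not isomorphic.
tri  = frozenset({frozenset((0, 1)), frozenset((1, 2)), frozenset((0, 2))})
path = frozenset({frozenset((0, 1)), frozenset((1, 2))})
print(all(protocol_round(tri, path, 3) for _ in range(20)))  # True
```

Since the triangle and the path are not isomorphic, Merlin identifies Arthur's graph in every round. If you instead passed in two isomorphic graphs, `protocol_round` would succeed only about half the time, which is exactly why repeated failures convince Arthur the graphs are distinct.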

------
diego898
Scott just gave a fantastic Nautil.us interview! Just placing it here for
completeness' sake: HN link [1] and direct link [2]

[1]:
[https://news.ycombinator.com/item?id=9074033](https://news.ycombinator.com/item?id=9074033)
[2]: [http://nautil.us/issue/21/information/ingenious-scott-
aarons...](http://nautil.us/issue/21/information/ingenious-scott-aaronson)

------
sinwave
Incidentally I'm reading Sipser's book for a philosophy course right now.

------
ExpiredLink
> _The purpose of this essay was to illustrate how philosophy could be
> enriched by taking computational complexity theory into account, much as
> it was enriched almost a century ago by taking computability theory into
> account. In particular, I argued that computational complexity provides new
> insights into the explanatory content of Darwinism, the nature of
> mathematical knowledge and proof, computationalism, syntax versus semantics,
> the problem of logical omniscience, debates surrounding the Turing Test
> and Chinese Room, the problem of induction, the foundations of quantum
> mechanics, closed timelike curves, and economic rationality._

O.M.G. Yet another natural scientist trying to coerce philosophy to his 'way
of thinking'.

~~~
j2kun
Aaronson studies theoretical computer science, which is a subfield of
mathematics. There is no coercion in his article. He is pointing out that
while philosophers like to discuss the philosophical implications of
computing, they tend not to discuss the more recent developments in the field
that (he argues) would aid them in eliminating faulty arguments and clarifying
terms like "artificial intelligence."

