
About the whole "machines can think" debate, I love this quote by EWD:

"Alan M. Turing thought about criteria to settle the question of whether Machines Can Think, a question of which we now know that it is about as relevant as the question of whether Submarines Can Swim."

(http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EW...)

Let's not fall prey to the temptation of thinking:

  Turing believes machines think
  Society acted terribly towards Turing
  Therefore machines do think.
EDIT: formatting.



It seems obvious now that:

    Turing was a machine
    Turing could think
    Therefore machines can think
Of course, whether Turing machines can think is still more of an open question, although it seems unlikely that the human brain is doing anything a Turing machine can't simulate, at least in theory.


Turing in fact dismisses the question in the famous letter. The Turing test isn't proposed as a test of whether machines can "think" but a test of whether they can imitate a person.

Turing considered the question of whether machines could think as being "too meaningless to deserve discussion".

http://books.google.com/books?id=CEMYUU_HFMAC&pg=PA317&lpg=P...

Also

http://plato.stanford.edu/entries/turing-test/

Sorry, these links only quote Turing; I couldn't find the original source.


So while we're at it, how do the proponents of the Turing test respond to the lookup table argument? Say I make a giant lookup table, with keys being all possible conversation prefixes and values being the answers. E.g., h["Hi"] = "Hi.", h["Hi. / Hi. / How are you?"] = "I'm OK, how are you?", etc. Sure, the lookup table would be big, but definitely finite, since its size is bounded by the maximum possible length of a Turing test conversation (which is at most the length of a human life--we would like humans to pass the test, right?). Will we be morally obligated to grant such a hash table the same rights a person enjoys?
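To make the argument concrete, here's a toy sketch of such a table, plus the arithmetic showing how fast it blows up. The names (`h`, `reply`, `table_size`) and the two canned entries are just illustrative, not from any real system:

```python
# A toy version of the lookup-table chatbot described above.
# Keys are full conversation prefixes, values are canned replies.
h = {
    "Hi": "Hi.",
    "Hi. / Hi. / How are you?": "I'm OK, how are you?",
}

def reply(conversation_prefix):
    # Fall back to a shrug when the prefix isn't in the table;
    # the "real" table would contain every possible prefix.
    return h.get(conversation_prefix, "...")

# With k possible symbols and conversations up to n symbols long,
# there are k + k^2 + ... + k^n distinct prefixes to store.
def table_size(k, n):
    return sum(k**i for i in range(1, n + 1))

print(table_size(2, 10))  # 2046 entries even for a 2-symbol, length-10 toy
```

Finite, yes, but the entry count grows exponentially in the conversation length, which is exactly what the complexity-based reply below leans on.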


http://www.scottaaronson.com/papers/philos.pdf , section 4. I consider that to be essentially the final word on the topic; it draws an objective, mathematically, philosophically, and physically meaningful distinction between a lookup table and a computing machine.


Thanks for the link. I just read section 4 and I don't really see how it addresses the argument. It mostly seems to argue against the opinion that computer programs cannot be sentient. My point is much weaker: that it is manifestly absurd to insist that all computer programs which can pass the Turing test must be presumed sentient.

The article acknowledges that a lookup table could pass the Turing test--the author even uses this argument for his own ends. At the same time, clearly he doesn't think we should presume the lookup table sentient. The only passage which might be interpreted as an "objective distinction" is this one:

> Personally, I find this response to Searle extremely interesting—since if correct, it suggests that the distinction between polynomial and exponential complexity has metaphysical significance. According to this response, an exponential-sized lookup table that passed the Turing Test would not be sentient (or conscious, intelligent, self-aware, etc.), but a polynomially-bounded program with exactly the same input/output behavior would be sentient. Furthermore, the latter program would be sentient because it was polynomially-bounded.

And yet in the very next paragraph the author says he's reluctant to stand behind such a thesis.

Do you, unlike Scott Aaronson, want to adopt this amended postulate--i.e., do you believe all computer programs which can pass the Turing test should be granted personhood as long as they scale polynomially with the length of the conversation?


Hypothesizing about the lookup table is an irrelevant question, because no such thing can exist in the real universe, not even in theory. Moreover, your lookup table contains some amount of information. Either it was generated by a relatively small polynomial process, in which case the table, however large it appears, really isn't, and is indistinguishable from simply using the program that generated it; or it really does contain exponentially large amounts of information, in which case hypothesizing an exponentially large source of information for the mere purpose of passing a Turing test is a bizarre philosophical step to take. Where is this exponentially large source of information?

Recall that in information theory, being a bit sloppy but essentially accurate with the terms, the amount of information something has can be expressed as the smallest possible encoding of something. The entire Mandelbrot set, as gloriously complicated as it may look, actually contains virtually no information, a mere handful of bits, because it's all the result of a very simple equation. No matter how gloriously complicated your enormous hash table may look, if it was generated with some humanly-feasible program and nigh-infinite amounts of time, the information content of the entire table, no matter how large, is nothing more than the size of the program and perhaps a specification of how long you let it run.
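The Mandelbrot point can be made literal: the membership test below is the "very simple equation" in question, a few lines that generate the whole infinitely detailed set (the iteration bound is a standard practical cutoff, not part of the set's definition):

```python
# Membership test for the Mandelbrot set: iterate z -> z^2 + c and
# check whether the orbit of 0 stays bounded. The entire set's
# "information content" is essentially just this function.
def in_mandelbrot(c, max_iter=100):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False  # orbit escaped; c is outside the set
    return True

print(in_mandelbrot(0))  # True: the orbit of 0 under c=0 stays at 0
print(in_mandelbrot(1))  # False: 0 -> 1 -> 2 -> 5 -> ... diverges
```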

Basically, the whole "big lookup table" has to have some sort of characteristic to it. Either it was created by a program, in which case the program itself could pass the Turing test, or it is somehow irreducible to a program's output, in which case in your zealous effort to swat a fly you vaporized the entire Earth; you can't agree to the possibility a machine might pass the test (or be sentient or whatever) but you can agree to the existence of an exponentially complicated source of information? That's only a gnat's whisker away from asking "Well, what if I use magic to create a philosophical zombie," (I'm referencing the specific concept of a philosophical zombie here, you can look it up if you like) "that looks like it's passing the test but it really isn't, what then?" Well, I don't know, when you're willing to invoke magic in your defense I'll concede I haven't got much of a response, but you probably shouldn't call that a victory.

The lookup table argument only makes sense nonconstructively, if you merely assert its existence but then don't allow anyone to ask any question about where it came from, or what properties it has.


So pretty much your argument boils down to, "such a lookup table cannot exist, therefore any argument using it is irrelevant." Note that even Scott Aaronson disagrees with you in the article you cited.

If we were having this debate in the 18th century, you could just as well assert that any machine capable of playing chess should be considered a person. The motivation is exactly the same as with the Turing test: so far only humans can play chess, humans are sentient, QED.

Say someone asked, "But what if a machine used a minimax algorithm?" To which you could respond, armed with your knowledge of 18th-century technology, "Such a thing cannot exist, therefore it's an irrelevant question."

As for the creation of such a table, not that I consider that question particularly relevant, but here it is: Say a crazy scientist in the future created a program that actually simulated a human brain, then ran it (on future super-fast hardware) on every possible input (once again, the size of the input is bounded by the maximum length of a conversation a human can have), and stored the results on future super-large hard drives. Then he deleted the human brain simulation program, and gave you just the lookup table. The act of deleting the original program we may very well consider murder. But what about the generated lookup table?
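The mad-scientist construction can be sketched directly. Everything here is hypothetical: `brain_sim` is a stand-in for the (later deleted) brain-simulation program, and the alphabet and length bound are shrunk to toy size; only the enumeration structure is the point:

```python
from itertools import product

ALPHABET = ["a", "b"]  # toy alphabet; a real one would be vastly larger
MAX_LEN = 3            # bounded by the longest possible conversation

def brain_sim(prefix):
    # Placeholder for the simulated brain's reply to a conversation prefix.
    return "reply to " + "".join(prefix)

# Run the simulation on every possible input and record the results.
lookup_table = {
    "".join(prefix): brain_sim(prefix)
    for n in range(1, MAX_LEN + 1)
    for prefix in product(ALPHABET, repeat=n)
}

print(len(lookup_table))  # 2 + 4 + 8 = 14 entries even for this toy
```

Once `brain_sim` is deleted, all that remains is the table, which is exactly the object the moral question above is asking about.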


> but a test of whether they can imitate a person.

Seems more that he didn't want to argue about definitions, but wanted to phrase it in such a way that anyone denying it would be no different than a solipsist: I think, therefore I am, but I don't know about the rest of you guys.

There are problems with the test (possibly like the very large lookup table argument), but it was a good approach overall.


As far as I have always understood it, Turing was really just asking the question of those who don't believe in strong AI: "Suppose a machine did pass the Turing test, what extra thing would you expect of it before you would be willing to say it was capable of thought?"


I prefer "Can planes fly?"

The Church-Turing-Deutsch Principle seems pretty uncontroversial when we ignore quantum mechanics (and remains questionable when it is considered). Add to this the lack of any evidence for any irreducible quantum requirements in the mind anyway and it seems pretty damn likely that the mind is algorithmic.

Even if the CTD Principle does not hold for whatever reason, that still does not put the mind and machines on different levels. It would merely imply that machines more powerful than Turing machines can be built (the mind being an example of such a machine). Not even Roger Penrose disputes a materialistic mind.


I didn't interpret EWD that way. It's not that machines don't think, but instead that 'think' is too ill-defined to be meaningful in the same sentence as 'machines'. Turing wasn't wrong to believe what he did, but perhaps at that point in time 'machines' did not mean something so precise as it does today.


I don't think anyone is 'falling prey' because of what happened to Turing. What evidence is there for this?


For example, consider this blog post: http://www.scottaaronson.com/blog/?p=63

Now ask yourself: how on Earth did the author consider the information about Turing's persecution relevant to the subject at hand?


    Turing thought they think
    And he suffered terribly
    Therefore they do think



