John McCarthy came to my class at Stanford on Wednesday, May 7. Here is a VERY rough transcript of the informal interview. It comes from the notes I was taking and from my memory. These are definitely not verbatim quotes from McCarthy.
-----Professor's Interview Questions-----
Q. Can Computers Think?
A. Thinking isn't one thing. It has many aspects. For example, computers have the ability to remember information and the ability to play games. There are some aspects of thinking in which we have not succeeded. A notable example is the analysis of situations: a computer cannot break a situation into parts, analyze the parts separately, and then combine the results to reach a conclusion. A specific manifestation of this is the game "Go". This type of thinking is necessary in "Go" but not in Chess, which is why the best computers are as good as people at Chess while being much worse than people at "Go".
Q. Is there anything in principle that would prevent a computer from thinking as a human would?
A. No.
Q. Can computers know?
A. This is largely a question of definition. If a camera looked at a table, we could say it "knows" that there are four containers of liquid on the table (which, in this case, was actually true).
Q. Is there any definition of "know" in which computers cannot succeed?
A. Well, I suppose the biblical sense.
Q. Ha, well, what makes you think that?
A. They don't satisfy the necessary axioms (laughter)
Q. OK, can a computer have free will?
A. In my paper on free will, I defined "simple deterministic free will," which a computer can have. In fact, modern chess-playing computers have this. However, this is not true of all displays of artificial intelligence. Consider two optimal tic-tac-toe-playing programs. The first evaluates future situations in order to choose the optimal move. The other simply looks at the state of the board, for which there are only 3^9 possibilities, and picks a move from a lookup table. The first program exhibits simple deterministic free will, whereas the second program does not. A chess program cannot use a lookup table because the state space is far too large. Thus quantitative considerations are important. Philosophers would have you believe that they are not, that a chess problem and a tic-tac-toe problem are equivalent. I believe quantitative considerations are important.
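(To make that contrast concrete, here is a minimal sketch of the two designs in Python. This is my own illustration, not anything shown in the talk, and all the names are mine. Program 1 deliberates by minimax search over future positions; program 2 plays from a table precomputed over the at most 3^9 = 19683 board states.)

from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def winner(board):
    # board is a 9-tuple of 'X', 'O', or ' '
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

def play(board, i, player):
    return board[:i] + (player,) + board[i + 1:]

def other(player):
    return 'O' if player == 'X' else 'X'

@lru_cache(maxsize=None)
def value(board, player):
    # minimax value with 'player' to move: +1 if X wins, -1 if O wins, 0 draw
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if not moves(board):
        return 0
    vals = [value(play(board, i, player), other(player)) for i in moves(board)]
    return max(vals) if player == 'X' else min(vals)

def search_move(board, player):
    # Program 1: chooses by evaluating the future each legal move leads to.
    best = max if player == 'X' else min
    return best(moves(board),
                key=lambda i: value(play(board, i, player), other(player)))

def build_table():
    # Program 2's offline step: tabulate one best move for every reachable
    # non-terminal position (at most 3**9 = 19683 states, so this is feasible).
    table = {}
    def visit(board, player):
        if (board, player) in table or winner(board) or not moves(board):
            return
        table[(board, player)] = search_move(board, player)
        for i in moves(board):
            visit(play(board, i, player), other(player))
    visit((' ',) * 9, 'X')
    return table

TABLE = build_table()

def table_move(board, player):
    # Program 2: no consideration of alternatives at play time, just a lookup.
    return TABLE[(board, player)]

(Both programs play perfectly, but only the first considers what would happen if it moved otherwise, which, as I understood him, is the sense in which it has simple deterministic free will and the lookup table does not.)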
Q. Simple deterministic free will does not require that a computer know that it has free will. How would a computer know that it has free will?
A. Well, computers are good at understanding theories. My theory of simple deterministic free will is a theory. You could teach it this theory.
Q. Are there some senses of free will that aren't simple deterministic?
A. (I didn't catch the first part of his response.) Some believe that free will is achieved through random aspects of quantum mechanics. This is particularly attractive to people who don't understand quantum mechanics.
Q. Can computers achieve consciousness?
A. Human consciousness starts with being aware of basic things such as hunger. Advanced states of consciousness are simply more elaborate forms of these basic awarenesses. We have a surprisingly limited ability to examine our own state: you would think we could remember what we've had for breakfast for each of the past 30 days, but we can't. Short answer: yes, machines can have consciousness.
-----Student Questions-----
Q. Why would we want to give computers emotions?
A. Human emotion involves the state of the blood, and this is inherited from our animal ancestors. Giving a computer this kind of emotion, or "state of the blood", would not be to our advantage.
Q. (Something that led him to talk about his new language Elephant)
A. Elephant was meant to come out in 2005, but 2005 has come and gone and the language isn't ready yet. It is a new way to talk to computers. I/O is done through speech acts. (He said something about the programming language dealing in obligations and promises, and I'm not sure what that means)
My Q: While we're on the topic of computer languages, would you consider Lisp more of an invention or a discovery?
A. If I hadn't come up with it, someone else would have. Pure Lisp was a discovery; everything that has been done with it since has been an invention. It started out as a formula for conditional expressions (if c then a else b), and the logical structure followed from that. I got the idea from Newell and Simon. They came out with a language called IPL in 1956. I heard about it and thought it was a fascinating idea; then I saw the language and thought it was horrible.
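(An aside from me, not from McCarthy: the conditional-expression point is easy to see in code. Once "if c then a else b" is an expression rather than a statement, recursive definitions, the kernel of pure Lisp, fall out of it directly. A tiny sketch in Python's "a if c else b" notation, with example functions of my own choosing:)

def fact(n):
    # factorial as one conditional expression: if n = 0 then 1 else n * fact(n - 1)
    return 1 if n == 0 else n * fact(n - 1)

def member(x, xs):
    # pure-Lisp-style list recursion: test the head, else recurse on the tail
    return False if not xs else (xs[0] == x or member(x, xs[1:]))

print(fact(5))                # 120
print(member(3, [1, 2, 3]))   # True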
Q. What is the future of AI?
A. Well, I'm really hoping the next great idea will appear soon. Yoav (our professor) is probably too old (laughter). I will tell you this: if you go to my web page and look at me from when I did most of my initial work, I wasn't much older than you... (looks expectantly around the room)
--------
Overall, it was an amazingly interesting talk. I'm not sure how well I captured that here. I wish I could have asked him more technical questions, but we were out of time. The best part, and probably one of the highlights of my freshman year at Stanford, came after class. My professor asked me if I had any experience editing wikis, and I said yes. He then asked me if I would mind helping McCarthy edit his Wikipedia page, and I said "sure", and I'm pretty sure my voice squeaked a little. A few minutes later, I was behind a computer, and John McCarthy was over my shoulder telling me what to add to HIS Wikipedia page. I tried to stay and talk afterwards, but was shooed away.
--------
(That's an interesting question: what makes a table a table? Not all tables are made out of the same material or have the same color. Not all tables have four legs (or legs at all!) and not all have a flat surface. Not all tables are the same height or width or are used for the same purposes. What, then, makes some particular table "a table"?)
We don't say that a camera "knows" there are three glasses of water on the table when it takes a picture any more than we say a newborn baby "knows", when he looks at a chessboard, that Kasparov has a mate in three.
I guess I shouldn't take McCarthy's comments at a freshman seminar as representative of his most thorough theories on AI, but I found that one answer particularly naive.