Tell YC: Answers from John McCarthy
136 points by mgummelt 3420 days ago | 72 comments
John McCarthy came to my class at Stanford on Wednesday, May 7. Here is a VERY rough transcript of the informal interview, reconstructed from the notes I was taking and from my memory. These are definitely not verbatim quotes from McCarthy.

-----Professor's Interview Questions-----

Q. Can Computers Think?

A. Thinking isn't one thing. It has many aspects. For example, computers have the ability to remember information and the ability to play games. There are some aspects of thinking we have not yet succeeded in reproducing. A notable example is the analysis of situations. A computer cannot break a situation into parts, analyze the parts separately, and then combine the results to come to a conclusion. A specific manifestation of this is the game "Go". This type of thinking is necessary in "Go", whereas it is not in Chess. This is why the best computers are as good as people at Chess, but much worse than people at "Go".

Q. Is there anything in principle that would prevent a computer from thinking as a human would?

A. No

Q. Can computers know?

A. This is largely a question of definition. If a camera looked at a table, we could say it "knows" that there are four containers of liquid on the table (which was true).

Q. Is there any definition of "know" in which computers cannot succeed?

A. Well, I suppose the biblical sense.

Q. Ha, well, what makes you think that?

A. They don't satisfy the necessary axioms (laughter)

Q. OK, can a computer have free will?

A. In my paper on free will, I defined "simple deterministic free will," which a computer can have. In fact, modern chess-playing computers have this. However, this is not true of every display of artificial intelligence. Consider two optimal tic-tac-toe-playing programs. The first evaluates future situations in order to choose the optimal move. The other simply looks at the state of the board, for which there are only 3^9 possibilities, and picks a move from a lookup table. The first program exhibits simple deterministic free will, whereas the second does not. A chess program cannot have a lookup table because the state space is too large. Thus quantitative considerations are important. Philosophers would have you believe that they are not, and that a chess problem and a tic-tac-toe problem are equivalent. I believe quantitative considerations are important.
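
Aside (not from the interview): here is a toy sketch, in Python, of the two designs he contrasts. The code and names are mine, not McCarthy's. The first player deliberates at move time by searching future positions; the second does all its work offline and merely consults a table while playing.

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        # board is a 9-tuple of 'X', 'O', or ' '
        for a, b, c in LINES:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def moves(board):
        return [i for i, cell in enumerate(board) if cell == ' ']

    def other(player):
        return 'O' if player == 'X' else 'X'

    def place(board, i, player):
        return board[:i] + (player,) + board[i+1:]

    def negamax(board, player):
        # Value of the position for the player about to move:
        # +1 forced win, 0 draw, -1 forced loss.
        w = winner(board)
        if w is not None:
            return 1 if w == player else -1
        if not moves(board):
            return 0
        return max(-negamax(place(board, i, player), other(player))
                   for i in moves(board))

    def search_player(board, player):
        # "Evaluates future situations": searches the tree on every move.
        return max(moves(board),
                   key=lambda i: -negamax(place(board, i, player),
                                          other(player)))

    TABLE = {}

    def build_table(board=(' ',) * 9, player='X'):
        # Enumerate every reachable position once, offline, storing a move.
        if winner(board) is not None or not moves(board) \
                or (board, player) in TABLE:
            return
        TABLE[(board, player)] = search_player(board, player)
        for i in moves(board):
            build_table(place(board, i, player), other(player))

    def table_player(board, player):
        # No deliberation at all, just memory: feasible only because
        # tic-tac-toe has at most 3^9 board states.
        return TABLE[(board, player)]

The same table-building trick is possible in principle for chess, but the state space is astronomically too large to enumerate, which is exactly the quantitative point.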

Q. Simple deterministic free will does not require that a computer know that it has free will. How would a computer know that it has free will?

A. Well, computers are good at understanding theories. My theory of simple deterministic free will is a theory. You could teach it this theory.

Q. Are there some senses of free will that aren't simple deterministic?

A. (I didn't catch the first part of his response) Some believe that free will is achieved through random aspects of quantum mechanics. This is particularly attractive to people who don't understand quantum mechanics.

Q. Can computers achieve consciousness?

A. Human consciousness starts with being aware of basic things such as hunger. Advanced states of consciousness are simply more elaborate forms of these basic awarenesses. We have a surprisingly limited ability to examine our own state. We ought to remember what we've had for breakfast for the past 30 days, but we can't. Short answer: yes, machines can have consciousness.

-----Student Questions-----

Q. Why would we want to give computers emotions?

A. Human emotion involves the state of the blood, and this is inherited from our animal ancestors. Giving a computer this kind of emotion, or "state of the blood", would not be to our advantage.

Q. (Something that led him to talk about his new language Elephant)

A. Elephant was meant to come out in 2005, but 2005 has come and gone and the language isn't ready yet. It is a new way to talk to computers. I/O is done through speech acts. (He said something about the programming language dealing in obligations and promises, and I'm not sure what that means)

My Q: While we're on the topic of computer languages, would you consider Lisp more of an invention or a discovery?

A. If I hadn't come up with it, someone else would have. Pure Lisp was a discovery; everything that has been done with it since has been an invention. It started out as a formula for conditional expressions (if c then a else b). The logical structure followed from that. I got the idea from Newell and Simon. They came out with a language called IPL in 1956. I heard about it, and thought it was a fascinating idea. I saw the language and thought it was horrible.
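
Aside (not part of his answer): the conditional-expression formula he mentions survives in most modern languages. A minimal illustration in Python rather than Lisp, my own rendering rather than anything he showed:

    # McCarthy's conditional expression (if c then a else b) is an
    # expression with a value, not a statement; Python spells it
    # "a if c else b". Combined with recursion, it is enough to
    # define functions the way early Lisp did:
    def factorial(n):
        return 1 if n == 0 else n * factorial(n - 1)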

Q. What is the future of AI?

A. Well, I'm really hoping the next great idea will appear soon. Yoav (our professor) is probably too old (laughter). I will tell you this: if you go to my web page and look at me when I did most of my initial work, I wasn't much older than you... (looks expectantly around the room)

--------

Overall, it was an amazingly interesting talk. I'm not sure how well I captured that here. I wish I could have asked him more technical questions, but we were out of time. The best part, and probably one of the highlights of my freshman year at Stanford, came after class. My professor asked me if I had any experience editing wikis, and I said yes. He then asked me if I would mind helping McCarthy edit his Wikipedia page, and I said "sure", and I'm pretty sure my voice squeaked a little. A few minutes later, I was behind a computer, and John McCarthy was over my shoulder telling me what to add to HIS Wikipedia page. I tried to stay and talk afterwards, but was shooed away.




   Q. Can computers know?

   A. This is largely a question of definition. If a camera
   looked at a table, we could say it "knows" that there are
   four containers of liquid on the table (which was true).
This is definitely a very old school approach to AI and one that I don't find very convincing. If a computer with a camera is looking at a table with four containers of liquid, to say that it "knows" "there are four containers of liquid on the table" presupposes that it "knows" what a "container" is, what a "liquid" is, and what a "table" is, and that it can recognize and point out each of these things in its 640x480 grid of RGB values and describe the essential properties of a container, a liquid, and a table. Even that presupposes that things like "container", "liquid" and "table" have "essential properties".

(That's an interesting question: what makes a table a table? Not all tables are made out of the same material or have the same color. Not all tables have four legs (or legs at all!) and not all have a flat surface. Not all tables are the same height or width or are used for the same purposes. What, then, makes some particular table "a table"?)

We don't say that a camera "knows" there are three glasses of water on the table when it takes a picture any more than we say a newborn baby "knows", when he looks at a chessboard, that Kasparov has a mate in three.

I guess I shouldn't take McCarthy's comments at a freshman seminar as representative of his most thorough theories on AI, but I found that one answer particularly naive.


I don't think this example represented his definition of "knowing". He was using it to show that there are very basic definitions of the word.


One can only express so much in a short sentence, but McCarthy's reply does sum up one version of a traditional cognitivist understanding of perception: the idea that knowledge is raw data and that thought is processing. It seems to me that much of traditional AI is unfairly dismissed, just as behaviorism was largely unfairly dismissed by the AI people. Such is the peril of a science being ruled by trends in the absence of strong findings (unlike, say, physics, the paradigm of a "real" science).

But I also think there are some very important ideas in the more recent work that began with connectionism and led to embodied/enactivist approaches. That work has led to a definition along the lines of: knowing is an organism's ability to interact effectively with its environment (which would involve being able to correctly predict the results of actions performed on an object). That would imply that both the organism and the environment are involved in the knowledge.

The camera has no knowledge of the table because it has not had the experience of lifting the table and feeling its weight, of being aware of its ability to throw it (and how far), to set it down (and how its weight will affect how quickly it hits the floor), of its surface as a stable place for setting other objects, etc. All of these interactions lead to the perceptual skills necessary to know and understand the table, which is to say, to have a trained neural network controlling and planning behavior and categorizing experience according to these trained expectations.

(That seems to imply some guiding principles for implementing AI: 1. basic locomotion and physical interaction with the world is an important and non-trivial problem; 2. there needs to be a linking theory to extend basic-level knowledge to novel, abstract categories of knowledge grounded in the earlier type. For 1, lots of neuroscience work is relevant, including constructivist/modeling approaches and the behavior-based AI paradigm; for 2, one major linking paradigm is the one that started with Rosch/Lakoff/Fauconnier/Gibbs etc., currently under the headings of conceptual metaphor and blending theory, cognitive linguistics, and embodied cognitive science.)


Knowledge about the world, or some part of it, potentially gives the ability to purposefully change it, or influence it if you wish. At least I'd be more comfortable with this definition of knowledge, which distinguishes a camera from a human.


On an unrelated note, is anyone else having trouble reading this because of the dark-gray on light-gray background?


Yes. Unfortunately Hacker News looks like it was designed by hackers :[

But that's like the 10th time hard-to-read-text has happened to me today, and it gives me an idea.

On-the-fly, site-specific CSS customizations stored in the cloud would be a killer feature for a Firefox extension. Something that useful must already be in Firebug ;) /goes to check



Personally, I find the design very elegant. I would hate to see changes (apart from the font color issue, if some readers have problems with that).


"Select All" should help. (I feel your pain though)


I find the contrast more of a problem on laptop screens.


In Firefox, you can turn off page styles under "View". Same for Opera. Not sure about Safari.


I tried to stay and talk afterwards, but was shooed away.

That's too bad. I always noticed the distinction between professors who welcomed interaction with students and those who shooed students away. Stanford has a lot of the latter. (Update: my assumption is that this is truer in general of top-tier schools than average ones, probably because smart, keen students stand out more at average schools and so get more attention. Of course, average schools have fewer smart, keen teachers too. But if you connect with the right one, the experience can be life-changing.)


No no no. I shouldn't have said I was shooed away. I was actually invited to stay for a bit after class and talk, but I made a mistake. My professor asked me if I had a class to go to, and I told him that I did but I could skip it. He told me I shouldn't skip my classes, so THEN I had to leave. Stanford professors, even the Nobel Prize winners, are extremely open to talking with undergrads after class or during office hours.


"... I was actually invited to stay for a bit after class and talk, but I made a mistake. My professor asked me if I had a class to go to, and I told him that I did but I could skip it. He told me I shouldn't skip my classes, so THEN I had to leave. ..."

Unfortunate. Next time stand your ground. You might not get another chance to meet someone like McCarthy.


Ha. That's much better, and I'm glad to hear it. Incidentally, I think grad school and undergrad are different this way too.


Pissing contest, I know, but that doesn't necessarily hold at all top-tier schools. At MIT I've found pretty much every professor to be willing to chat with students, whether just after class or at office hours. At the same time, I have noticed a general lack of students wanting to...


I think that a model of when students will talk with profs is the following:

"comfort level being outgoing" * "likelihood of doing research for fun" + "comfort level being outgoing" * "confusion over the day's lecture"

This, plus a little bit of reflection on the distribution of socialization styles amongst smart folks, is a fairly complete model.


Some believe that free will is achieved through random aspects of quantum mechanics. This is particularly attractive to people who don't understand quantum mechanics.

This man is now my personal hero.

Suddenly I understand why so many bright people were drawn to work with McCarthy. :)


Ties in nicely with: "If you think you understand quantum mechanics, you don't understand quantum mechanics"


Um, no, that is a quantum mysterianism line.

This might have been true in the days before anyone had the concept of macroscopic decoherence (popularly known as many-worlds), but quantum physics is pretty normal these days. See the recent series at Overcoming Bias.


So, we have reconciled quantum mechanics and general relativity? We have put quantum mechanics on deterministic grounds? Many-worlds is accepted fact? Indeed I must check out overcoming bias....


Well said. I got the impression in this thread that some AI researchers are trying to beat physics into shape so that it would support certain views favorable to AI.

[This is not to say that those researchers are unintelligent, just that a certain bias may exist.]


I really liked this one:

http://www.overcomingbias.com/2008/04/on-being-decohe.html

Of course, I have a Ph.D. in semiconductor lasers, so I've been trying to think about such stuff off and on for years... which means I'm a terrible test case for articles like this. ;)



Check his submissions; overcomingbias.com is one of the links.


The man behind the handle is probably Eliezer Yudkowsky, a researcher at SIAI who, among other things, runs the annual Singularity Summit. In case anyone wants to know.


Right. I was wondering why he appears to have two handles -- both using his name, both posting stuff related to his research.


I realized that after responding, my bad.


I don't think you can dismiss Sir Roger Penrose like that. He may be wrong, but it's not for lack of understanding of quantum mechanics.


Why on earth not? Because he's got a knighthood, and might smack me with a mace?

It's not as if there aren't many, many great physicists who have failed to understand quantum mechanics. Max Planck invented the field but never understood it. And Einstein, if he did indeed understand it, never stopped wishing that he didn't have to.

Anyway, I haven't read Penrose's arguments -- geez, I barely take time to read credible science these days -- so I'm not dismissing him: I'm outsourcing my dismissal of him to guys like Dennett. Meanwhile, I'll happily accept your assertion that Penrose understands quantum mechanics perfectly well even if he enjoys abusing it for philosophical kicks, kind of like how the screenwriters who wrote Gladiator claim to have actually read real Roman history before they reinvented it for the screen.


I'd just like to say that "I'm not dismissing him: I'm outsourcing my dismissal of him to guys like Dennett" is the best line I've read recently.

From now on I'm outsourcing all my dismissiveness to Daniel Dennett. I just don't have the time to be properly dismissive, and he's so much better at it than I am anyway.


Penrose’s books are well worth reading even if you disagree with the speculative bits. He is an eloquent writer, and most of it is lucid layman-accessible explanations of “credible science.”


Go find out who you're talking about.


In his books he is rather humble about his knowledge of the subject. Yes, he proposes this hypothesis and supports it, but he stops well short of "I know more than the specialists". It's a lot more like "from what I know, it seems very probable".

I also think he's wrong, but at least he's not infatuated with it.

edit: I love his books. Well worth the read, and for those concerned, he puts his speculations in chapters very clearly marked "speculations". From his books I first learned the details about Turing machines and lambda calculus, and a lot more.


The core of his argument is that humans can know truths that cannot be discovered computationally. If that is true, then it implies two things: 1. conventional AI will never reach human levels of intelligence; 2. to understand how human intelligence works, we need to discover new physics, most likely in the area of quantum mechanics.

It all hangs, not on quantum mechanics, but on his initial assertion. I've read his explanation in Shadows of the Mind, but the lightbulb didn't go on for me. For McCarthy to dismiss the question offhand was a little disappointing. I'd expect him to have a deeper insight.


We as a species have a long history of explaining the unknown with magic, so we should tread carefully around this kind of explanation, simply because we should be aware of our strong bias toward it.

What the quantum-intelligence hypothesis does is take one unresolved problem (how do we think?) and replace it with another (we think with quantum computing, but we don't know exactly how). Its only result is to take away the unknown and replace it with an incomplete explanation.

I also don't think Penrose would come to this conclusion today. Cognitive psychology has taken enormous strides in the past few years, and it's already pretty clear it's on the right track. Take a look, for example, at "The Emotion Machine" by Marvin Minsky (who, by the way, is on par with McCarthy in AI but chose cognitive psychology as his main field: http://en.wikipedia.org/wiki/Marvin_Minsky).


I admire McCarthy a lot, but I'm with you there in being a bit disappointed. As for humans being able to know truths that cannot be discovered computationally, there is some evidence [1] supporting the hypothesis that animal brains do what they do using analogue information processing. So it may be that there are "truths" which can only be "known" using analogue processes, in which case, as digital reasoners, computers would always remain at a disadvantage when compared against humans in terms of "intelligence".

[1] Spivey, M., Grosjean, M. & Knoblich, G. (2005). Continuous attraction toward phonological competitors. Proceedings of the National Academy of Sciences, 102(29), 10393-10398.


Hypothetically speaking, if we develop a sufficient understanding of analogue information processing, couldn't we build either (a) some kind of analogue co-processor that operates in this manner and interfaces with the computer, or even (b) a sufficiently precise digital simulation of a system that can use analogue processes?


Maybe we can. I hope we'll invest in finding out, and soon. Then again, John McCarthy doesn't seem to agree, as this answer suggests:

Q. Is there anything in principle that would prevent a computer from thinking as a human would?

A. No

IOW, there's still no recognition today, on the side of the purveyors of "classical" AI, that anything except digital processing might be needed for a computer to think "as a human would". So the big money is likely to continue being thrown at attempts to emulate animal brains using purely digital means. And I suspect that these funds might largely be better spent elsewhere.


I tried to find a free PDF version of that paper, but no such luck. However, I found an earlier one by Michael Spivey and Rick Dale, "On the Continuity of Mind: Toward a Dynamical Account of Cognition" (59 pages, http://www.cogstud.cornell.edu/spiveylab/PLM.pdf).


Can someone explain this? It seems the connection between quantum mechanics and free will is being dismissed out of hand. However, this seems like a deep question that probably has no simple answer. Is this just a 'religious' issue? Or is there a simple explanation for why there is no connection here?


The "simple" explanation is that lots of people think that quantum mechanics is nondeterministic. And it isn't. If you know the quantum state of a system at time X you can figure out what it will be at any time after that.

People get confused because it's easy to look at a quantum mechanics problem from the wrong angle and see nondeterminism. We're really well trained in intuitive mechanics ("an electron is like a tiny tennis ball, and tennis balls are always either here or there, right?"), so at first glance quantum mechanics seems wacky and random: the electron might be here, or it might be there, with equal probability, and we can't tell which! Whereupon your head explodes. Seriously: the discoverers of quantum mechanics exploded in horror, and they started ranting about that crazy cat in the box, or "God playing dice". (Colorful but misleading metaphors. One reason that Bohr and Einstein's early confusion persists today is that they were just so darned eloquent.)

In fact, God does not play dice: God sees all the outcomes of the dice roll at the same time and doesn't understand why we think it's a game, and not a static work of art.

People also think that nondeterminism is somehow an important ingredient in whatever it is we mean by "free will". This doesn't make much sense to me. If you want it to not make sense to you as well, read Daniel Dennett's Freedom Evolves. You might want to budget more than a couple hours for that book, though.


God does not play dice? Are you saying that you can figure out radioactive decay?

From [http://www.fourmilab.ch/hotbits/how3.html]

"But hidden variables aren't the way our universe works—it really is random, right down to its gnarly, subatomic roots. In 1964, the physicist John Bell proved a theorem which showed hidden variable (little clock in the nucleus) theories inconsistent with the foundations of quantum mechanics. In 1982, Alain Aspect and his colleagues performed an experiment to test Bell's theoretical result and discovered, to nobody's surprise, that the predictions of quantum theory were correct: the randomness is inherent—not due to limitations in our ability to make measurements. So, given a Cæsium-137 nucleus, there is no way whatsoever to predict when it will decay. If we have a large number of them, we can be confident half will decay in 30.17 years; but if we have a single atom, pinned in a laser ion trap, all we can say is that is there's even odds it will decay sometime in the next 30.17 years, but as to precisely when we're fundamentally quantum clueless. The only way to know when a given Cæsium-137 nucleus decays is after the fact—by detecting the ejecta. A Cæsium-137 nucleus which has “beat the reaper” by surviving a century, during which time only one in a thousand of its litter-mates haven't taken the plunge and turned into Barium, has precisely the same chance of surviving another hundred years as a newly-minted Cæsium-137, fresh from the reactor core."


Go read the overcomingbias essay on the Many-Worlds interpretation posted above. (Or is it below?)

The short answer is that in the many-worlds interpretation, this too is a deterministic process. The total wavefunction of the whole system (atom + observer) evolves deterministically. What isn't deterministic is "your" subjective view of it, but "you" only view a vanishingly small slice of reality.

Sorry, that's the best three-sentence explanation I can come up with right now, and I admit it's only a shade better than "trust me, I'm a physicist". But trust me, I'm a physicist.


I don't quite understand this God and dice explanation and what role God plays in it. Is spooky action at a distance really not spooky?

Edit. Nevermind, I see what you mean.


Basically he's saying that 'quantum mechanics' is the scientific substitute for 'magic'. Most people know jack about it, just that it's really hard and explains weird things.


Many of the people who ask this question don't really want an answer, because they're already committed to leaving "room" for an immaterial soul. So the simplicity of the answer depends mostly on the ideological commitment of the questioner.

I'm game if you are, though. You might start by thinking about a modern digital computer. It's made up of a very large number of very small devices (transistors) that sling populations of electrons back and forth. The brain is made of a much larger number of less-small devices called neurons, which sling bigger populations of bigger objects (organic molecules) back and forth. So according to the basic understanding of QM, which tells us that relative indeterminacy increases for smaller objects, the functioning of the human brain is marginally more predictable than the insides of a modern computer.


Because quantum consciousness is a 'special sauce' theory analogous to what vitalism was in biology - the idea that biomechanical processes are not enough to account for life and that some magical 'vital principle' is necessary. Fueled by people with an emotional agenda, that debate went on for decades before being thrown on the same garbage heap as geocentrism.


If I hadn't come up with it, someone else would have. Pure Lisp was a discovery; everything that has been done with it since has been an invention

For me, this pretty much sums it up--thank you for posting ;)


Elephant sounds interesting. More info here: http://www-formal.stanford.edu/jmc/elephant/elephant.html


Cool! Such experiences are awesome.

A. If I hadn't come up with it, someone else would have. Pure Lisp was a discovery; everything that has been done with it since has been an invention.

This brought back thoughts about Gladwell's recent essay which was posted here.


Although Gladwell would argue that Lisp, too, was as much an invention as anything done with it afterwards, insofar as the person who invented, say, CLOS was simply discovering it (in McCarthy's terms) before someone else did.


Probably a continuum from discovery to invention. I don't know CLOS, but it sounds like mostly an invention.

Aston - email me at info@reatlas.com please.


"We ought to remember what we've had for breakfast for the past 30 days, but we can't."

Bullshit - I've had Cocoa Puffs.


Good enough to drool over. But I'd really want to know what he thinks about production software, not just the cute standard AI/consciousness questions. Like how to make a better, faster, more reliable, and more customizable ERP workhorse.


If only we knew more about how Spinoza made those lenses.


It's like asking Hawking about black holes. These people have huge careers that are not limited to what every digger knows about them. For one thing, his most recent work, Elephant, is a lot more about production software than about AI; for another, they must be pretty bored answering the same "what is consciousness" and "does light really disappear in a black hole" questions.

Also, forgive me for wanting to be a better programmer, especially here on Hacker News. What was I thinking.


That is a fair point. But I think Hawking can probably tell us more interesting things about black holes than about, say, 101 tricks for quickly simplifying a differential equation. If there's a question that nobody but McCarthy could answer satisfactorily, McCarthy is the one to ask.


That was the reason for my original post. I think McCarthy can tell us a lot about production software too, probably a lot more than most. It's just that nobody asks him, and instead we get the same answers we can find in his last 10 interviews.

I wouldn't mind AI questions too, if only they were a bit more original.


Had anyone else just assumed McCarthy was dead? Sorry, John.


Out of curiosity, what class was this?


Never mind; you mention the professor and you're a freshman, so I assume it's CS21N.


Yeah, CS21N. It's probably the least technical CS class you can take.


Mac users who actually want to be able to read the text: press Ctrl+Alt+Command+8.


More exciting than P=NP: Does Turing Complete = Human Complete?


This should be on the front page.


From your memory??


Humans have spontaneous thoughts, allowing us to build something in our minds first and then try to build it concretely.

(So we can think to build computers, but computers can't think to build humans...)

Neither animals nor computers will ever be capable of that.

Also, I find his responses quite superficial at times.


"Spontaneous", huh? Does that mean it was truly random (e.g. that you're just as likely to think up a coherent sentence in Farsi as in English) or that you are not consciously aware of the process that leads to it? Either way, a computer can handle it: it can operate on outputs for which the inputs or processes are not known, or it can take some random inputs.


By 'spontaneous' I mean: not caused by anything. All really new ideas are spontaneous.

You find them (freshly created) inside yourself, but you can't tell where they came from.


(Sorry, but simply down-voting isn't 'good enough' here -- if you aren't able to provide some convincing argument, you just confirm my opinion.)


[deleted]


I don't think it is... I believe understanding the idea behind the answer hinges on the definition of "know".

He pointed that out in the answer.





