Hacker News
Summary of 'Programs with Common Sense' (1959) by John McCarthy (jackhoy.com)
77 points by jackhoy on Feb 26, 2017 | hide | past | favorite | 5 comments



Ah, it seemed so simple back then. I took McCarthy's "Epistemological problems in artificial intelligence" at Stanford in the 1980s. Once, he described the missionary and cannibals problem. Then he set it up in a form where his circumscription approach would work, turned the crank on the formalism, and the answer came out. As he was setting up the problem in the correct form, I thought "This is where the miracle occurs".

The mid-1980s were the high point of trying to hammer the real world into first order logic. This was the era of "expert systems", followed by the "AI Winter". It turns out that expert systems are just another way to program. They're a domain specific language for a modest class of problems, mostly trouble-shooting and rule-driven decision making. The expert systems crowd was talking "strong AI real soon now", which was never going to happen with that technology.
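The "just another way to program" point can be made concrete with a minimal forward-chaining rule engine. This is an illustrative sketch only; the rules, facts, and function names below are assumptions for the example, not drawn from any actual expert system.

```python
# Minimal forward-chaining rule sketch. An "expert system" of this era was
# essentially an if/then DSL over a fact base: premises in, conclusions out.
# The trouble-shooting rules and facts here are illustrative assumptions.

facts = {"engine_cranks": False, "battery_dead": True}

rules = [
    # (set of premises, conclusion)
    ({"battery_dead"}, "replace_battery"),
    ({"engine_cranks", "no_fuel"}, "refuel"),
]

def infer(facts, rules):
    # Fire every rule whose premises are all currently true.
    true_facts = {f for f, v in facts.items() if v}
    return [concl for prems, concl in rules if prems <= true_facts]

print(infer(facts, rules))  # -> ['replace_battery']
```

Writing the rules *is* the programming: the deductive machinery is generic, but the domain knowledge has to be hand-encoded, which is why the approach topped out as a domain-specific language rather than strong AI.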

Today, we may be going too far in the other direction, with model-less machine learning. Some of that is scary, such as self-driving using imagery only, trained from data recorded by human drivers. There's no geometric model there, just recognition of known objects.[1] This works great, until it fails badly.

That's what led Tesla's Mobileye-based Autopilot into three crashes. It recognizes "car ahead". It recognizes road lines. It ignores objects, including cars, alongside the road. But it also ignores "car partially projecting into road ahead". Not good.[2]

You need to map obstacles geometrically first, then classify them: "Car", "Pedestrian", "Bicycle", "Moving thing, unclassified". If it can't be classified, it's still an obstacle, and if it's moving, you have to assume its movements are not very predictable. This may result in annoyingly conservative driving around bicycles, skateboards, and deer. That's a good failure mode. Google/Waymo seems to do that.
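The "geometry first, labels second" policy above can be sketched in a few lines. This is a hypothetical illustration: the class names, speed threshold, and `plan_around` function are assumptions for the example, not anyone's actual planner.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch: detect obstacles geometrically, then classify.
# An object that fails classification is still treated as an obstacle.
KNOWN_CLASSES = {"car", "pedestrian", "bicycle"}

@dataclass
class Obstacle:
    label: Optional[str]  # classifier output; None if unrecognized
    speed: float          # m/s, from geometric tracking

def plan_around(obstacle: Obstacle) -> str:
    if obstacle.label not in KNOWN_CLASSES:
        # Unclassified and moving: assume unpredictable motion.
        if obstacle.speed > 0.5:
            return "slow down, keep wide margin"
        return "treat as static obstacle, avoid"
    # Known class: use its behavior model (cars stay in lanes, etc.).
    return f"apply {obstacle.label} behavior model"

print(plan_around(Obstacle(label=None, speed=2.0)))
# -> slow down, keep wide margin
```

The key design choice is the order of operations: the geometric map guarantees every physical object gets avoided, and classification only refines the prediction of how it will move, so a recognition failure degrades to conservative driving rather than to ignoring the object.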

[1] http://www.princeton.edu/~alaink/Orf467F14/Deep%20Driving.pd... [2] https://www.youtube.com/watch?v=fc0yYJ8-Dyo


> "It turns out that expert systems are just another way to program"

Yes, I agree with this: if you are providing the expert system with premises, even ones not specific to a particular problem, then you are effectively programming it.

I wonder if we could discover and obtain premises through some automated system; the deductive process could still apply, leaving us with an understanding of how the solution to a goal was reached via the deductive argument. Do you know if this has been attempted?


This seems to be relevant for premise selection: https://arxiv.org/pdf/1606.04442.pdf. But I haven't found anything yet on premise collection.


For [2], Tesla says they don't have enough data to know whether Autopilot was actually engaged. It seems to be doing fine with little data in these videos: https://www.youtube.com/watch?v=BfOL7AxWicY




