I think the problem has arisen from a misunderstanding of what Turing was getting at in his paper. At first he talks about chess (an example chosen, as we know, because it has a well-defined rule set) and a computer imitating a chess player.

He makes the point that, to a human player, it may be difficult to distinguish between a human player and a computer imitator. Even though the computer is not AI or otherwise "intelligent", it could be mistaken, under the constraints of the test, for a human.

Then he sets up the more complex test with the questioning (i.e. "convince me you are a man").

The point there is much the same: at some point it becomes possible to construct a machine that, within the constraints of the test, is functionally "human". He never claims this as an ultimate test for AI.

IMO the greater point he suggests, which always seems to get glossed over, is this: at some level of complexity a computer will be able to pass a test (as yet undefined) in which it imitates complete human intelligence so well that it appears to be full AI.

That could all be rambling... but that is my understanding of his point.

In the interim, the Turing Test has been mixed around and confused to such a degree that this insight has been forgotten.




The Turing test should introduce a completely unexpected set of questions. (Double quotes for the human, single quotes for the AI.)

"Hi"
'hi, how are you?'
"Good. Do you know what we are doing today?"
'Yes, we are attempting to prove that I possess sentient intelligence.'
"Right. Would you like to prove that you are sentient?"
'Yes.'
"Excellent. I would like you to design a new five-wheeled vehicle for me. Can you do that?"
'Yes. Is AutoCAD acceptable?'
"Sure. Start with the basics, though; don't dive in. I'd like to see successful iterations and reasoning about the design choices."

Something like that. Otherwise it's all just BS breadth-first search through other people's past conversations.
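To make the "breadth-first search through other people's past conversations" jibe concrete, here is a minimal sketch in Python. It is not any real chatbot's internals; the REPLIES/NEIGHBORS corpus and the canned_reply helper are all hypothetical. Logged exchanges form a graph of prompts, and the bot BFS-walks outward from the incoming prompt to the nearest canned reply.

    from collections import deque

    # Hypothetical toy corpus: each logged exchange maps a prompt to a reply,
    # and "similar" prompts are linked so the bot can fan out from a near miss.
    REPLIES = {
        "hi": "hi, how are you?",
        "how are you": "good, thanks!",
        "what are we doing today": "we are attempting to prove that I possess sentient intelligence",
    }
    NEIGHBORS = {
        "hi": ["how are you"],
        "how are you": ["hi", "what are we doing today"],
        "what are we doing today": ["how are you"],
    }

    def canned_reply(prompt: str) -> str:
        """Breadth-first search outward from the prompt for the nearest logged reply."""
        start = prompt.lower().strip("?!. ")
        seen = {start}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            if node in REPLIES:
                return REPLIES[node]          # nearest past conversation wins
            for nxt in NEIGHBORS.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return "I don't know."                # nothing in the corpus covers this

    print(canned_reply("Hi"))                                 # -> hi, how are you?
    print(canned_reply("Design a new five-wheeled vehicle"))  # -> I don't know.

The five-wheeled-vehicle request falls outside the logged corpus, so the bot can only shrug, which is exactly the failure being described above.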


You don't want an AI; you want an artificial genius who can handle any task.


I think the difference is undirected AI. We're getting quite good at defining a task, like voice recognition, and applying AI to solve it, but we still have basically no idea how to handle undirected human-like intelligence. And I think that's perfectly fine.


No, my point is that any arbitrary human would have difficulty handling a random, untrained-for technical task at the level of designing a vehicle in a drafting program like AutoCAD.

Maybe what you want to say is, "I would like you to spend 4 to 10 years learning how to build a five-wheeled vehicle." Then maybe the AI comes back to you and says, "Is MIT an acceptable institution to learn these skills at?"


Fine, but even a child can start drawing a five-wheeled car. It isn't AI if it can't at least handle the question.



