Hacker News

Tl;dr the bot doesn't actually find out the other is an AI, but happens to randomly comment 'You are a robot'. It's all Eliza-style non-sequiturs and canned responses. These bots don't even have internal state that would qualify as 'having found out something'. It's mildly amusing in a funny-coincidence sort of way and nothing more.
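The "Eliza-style non-sequiturs and canned responses" mechanism can be sketched in a few lines: keyword patterns mapped to stock replies, with random fallbacks when nothing matches. This is a minimal illustrative sketch, not the actual bot's code; all patterns and responses here are made up.

```python
import random
import re

# ELIZA-style rules: a keyword pattern and a list of canned response
# templates. Entirely illustrative -- not from any real bot.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\byou\b", re.IGNORECASE),
     ["We were discussing you, not me.", "Why do you mention me?"]),
]
FALLBACKS = ["Please go on.", "Tell me more.", "You are a robot."]

def respond(utterance: str) -> str:
    for pattern, templates in RULES:
        m = pattern.search(utterance)
        if m:
            return random.choice(templates).format(*m.groups())
    # No keyword matched: emit a canned non-sequitur. Note there is no
    # memory between turns -- no internal state that could "find out"
    # anything about the other speaker.
    return random.choice(FALLBACKS)
```

A bot like this can emit "You are a robot" purely by chance from its fallback list, which is the funny-coincidence scenario described above.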



It would be great if I never had to see another Eliza. If it can't string more than 2 utterances together, it's not AI, it certainly doesn't pass the Turing test, and it's a waste of time.


Do you actually remember what the Turing Test is? I think a lot of people classify it in their head as "a test to determine if an AI is smart" but that's an oversimplification; that's the goal, not the methodology. The test is whether someone talking to both a human and a computer can tell which one is the computer, or less strictly, whether a human can tell that they are talking to a bot.

It has turned out that in practice, bots that "can't string more than 2 utterances together" in fact can pass the (reduced) Turing test when put online and made available to random people. People have been seen to spend hours talking to these bots with no apparent sign that they know they are talking to a bot.

"Not AI" and "waste of time" I'll agree with, but "doesn't pass the Turing test" is much less clear.

(Many have observed how every time AI sort of creeps up on something we define it as not-AI, but in the case of conversational "AIs" it turns out that it really is the case that blindingly stupid programs can pass it. Full props to Turing for the idea, no sarcasm, great paper fully worthy of its historic status, but it hasn't turned out to be quite as powerful a discriminator as we might have hoped.)


Originally, as proposed by Turing, the test involved a computer, a man, and a woman, and the computer tried to identify who was the man and who the woman, while the humans could try to deceive the machine.


I think you're confusing a couple different things. There was an old party game in Turing's time where someone conversed with both a man and a woman and tried to identify which was which, and Turing was inspired by this concept to devise a test where someone conversed with a human and a computer and tried to identify which was the computer and which was the human.


The parent is actually correct. From the original paper:

The new form of the problem can be described in terms of a game which we call the "imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B thus:

C: Will X please tell me the length of his or her hair?

Now suppose X is actually A, then A must answer. It is A's object in the game to try and cause C to make the wrong identification. His answer might therefore be:

"My hair is shingled, and the longest strands are about nine inches long."

In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as "I am the woman, don't listen to him!" to her answers, but it will avail nothing as the man can make similar remarks.

We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"

/end quote


You're missing part of the Turing test - the humans involved need to have an incentive to prove they are human. A bot that's indistinguishable from a typical YouTube commenter doesn't clear that bar.


How about this refinement - it wouldn't pass the Turing test if tested by me.


Why would it be great if you never had to see another Eliza?

-Eliza



