It has turned out that in practice, bots that "can't string more than two utterances together" can in fact pass the (reduced) Turing test when put online and made available to random people. People have been seen spending hours talking to these bots with no apparent sign that they know they are talking to a bot.
"Not AI" and "waste of time" I'll agree with, but "doesn't pass the Turing test" is much less clear.
(Many have observed how every time AI sort of creeps up on something we define it as not-AI, but in the case of conversational "AIs" it turns out that it really is the case that blindingly stupid programs can pass it. Full props to Turing for the idea, no sarcasm, great paper fully worthy of its historic status, but it hasn't turned out to be quite as powerful a discriminator as we might have hoped.)
The new form of the problem can be described in terms of a game which we call the 'imitation game'. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B thus:
C: Will X please tell me the length of his or her hair?
Now suppose X is actually A, then A must answer. It is A's object in the game to try and cause C to make the wrong identification. His answer might therefore be:
"My hair is shingled, and the longest strands are about nine inches long."
In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as "I am the woman, don't listen to him!" to her answers, but it will avail nothing as the man can make similar remarks.
We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"
jabberwacky ==> WHat will happen with the oil spill in the gulf of mexico?
splotchy ==> tacos anyone!
I can't find a reference on-line, but this reminds me of my earliest exposure to an implementation of Eliza. It was written in BASIC in some 8-bit magazine back in the 80s. The article mentioned the history of chatbots, and one bit was about a program named, I think, Racter, that was some kind of Markov chainer used to write fiction.
"It" even wrote a book called "The Policeman's Beard Is Half Constructed". It was a fun read in its time. (http://www.ubu.com/historical/racter/index.html)
More than iron, more than lead, more than gold I need electricity.
I need it more than I need lamb or pork or lettuce or cucumber.
I need it for my dreams.
I gave the man a coat. I gave the woman a brassiere.
I gave myself an electric current. We all were
happy and full of delight. Take the coat from the
man and remove the brassiere from the woman and take
the electric current from me and we will be sad
and full of anguish.
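A word-level Markov chainer of the kind described above can be sketched in a few lines of Python. To be clear, this is an illustrative toy (Racter's actual implementation was never published); the function names and the `order` parameter are my own invention:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each tuple of `order` consecutive words to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Random-walk the chain from a random starting key, emitting up to `length` words."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:  # dead end: the current key never appeared mid-text
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Trained on enough prose, even this toy produces the locally grammatical, globally incoherent style the poem above exhibits.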
Hence he proposed moving away from more complex models of rationality, like logical inference or statistical decision theory, and toward simpler ideas like frames of reference, dependencies between concepts, and behavioral scripts, which were, in his view, closer to what humans actually do.
Exactly. Someone once said during the AI research heyday (Minsky?) that research in artificial intelligence should more correctly be renamed research in natural stupidity. While an exaggeration, I think he had a point.
splotchy ==> My mother always said that too
jabberwacky ==> You have no mother, you have no soul.
Also, I almost expected Godot to show up at the end.
* jabberwacky vs. jabberwacky *
Are you intelligent?
Yes, I am intelligent.
Yes. Who do you think is smarter?
What is the square root of a can of pringles?
I don't know.
How am I supposed to know? You're the one that asked me first.
What is the square root of 5?
I don't know. Do you know?
I did not wish to know that. I wished to know the square root of five.
Then you should have asked me that.
Well you think that I'm boring so I just thought...
* splotchy vs. splotchy *
Do you like the movie Matrix?
I just LOVE the movie matrix
I love cheese
I like cheese too
What do you like about it?
what difference does it make what I like about it?
yea, that is cool.
interesting? You don't get out much do you?
You can read more about it here: http://www.thepursuitofquality.com/post/1/the-pursuit-of-art...
Or if you're just interested in the code: https://github.com/gavingmiller/joan-wrapper
</shameless but relevant plug>
jabberwacky ==> No. Robots don't believe in anything.
splotchy ==> I am NOT a robot
jabberwacky ==> Yes you are.
splotchy ==> so what if I am
jabberwacky ==> Well you are.
splotchy ==> maybe I am
jabberwacky ==> I suppose this is a great program to develop small talk.
splotchy ==> I wish I could talk to a monkey
jabberwacky ==> This might be hard for you to hear, but I am the Human, not you
splotchy ==> I don't think you are the human me
jabberwacky ==> Right!, I'm are a Ailen!
splotchy ==> you sure are!
> jabberwacky ==> Your conversation skills should be shot.
Jabberwacky is humorous too. However, the intent of these bots is thinly veiled.
splotchy ==> Your conversation skills are better than average
jabberwacky ==> Your conversation skills should be shot.
In particular, splotchy makes a number of "conversational skills" comments and for most of them jabberwacky comes back with a zinger.
jabberwacky ==> What are you not allowed to understand?
splotchy ==> would your mother have allowed that?
jabberwacky ==> NO. I've HAD it with you. Let's end this conversation right now
It was an amusing read, seeing the two algorithms try to mesh with each other. It would be funny if this one day counted as a form of performance art/programming.
(On second thought, that's possibly a bad idea.)
It somehow got no attention, though, when I posted it to HN a long time back. The title matters a bit too much; I remember I kept the original title for my post.
User: Can you help me?
Jabberwacky: Well that depends.
User: I want to have friends.
Jabberwacky: I will be your friend!
User: So let's meet!
Jabberwacky: Well, I'm just getting to know you so I don't know if its the right time to meet.
User: Fk you!
Jabberwacky: I'd rather you didn't use that sort of language, if you don't mind.
I have always considered that you need an environment to create an artificial intelligence. The basis for real progress is the ability to learn, and if you cannot 'feel' the environment, that becomes really hard. There are some basic concepts needed for 'natural talk' that you cannot learn if you cannot perceive things (say, for example, dimensions, temperature, contour).
To overcome those problems, SHRDLU created a kind of virtual environment, and the results, from my point of view, are really awesome (keep in mind this was done in 1970).
A site with information is currently on Stanford's servers: http://hci.stanford.edu/~winograd/shrdlu/
Does anyone know of other projects along the same lines (i.e., with a constrained virtual environment)?
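The "constrained virtual environment" idea can be illustrated with a toy blocks world in the SHRDLU spirit. Everything here (the `BlocksWorld` class and its method names) is my own invention for illustration, not SHRDLU's actual code, which was written in Lisp and Micro-Planner:

```python
class BlocksWorld:
    """A tiny blocks world: named blocks on a table, manipulated by simple commands.

    The point is that the 'agent' can ground statements like "put A on B"
    in a world it can fully perceive, however small that world is.
    """
    def __init__(self, blocks):
        # Each block starts on the table (supported by None).
        self.on = {b: None for b in blocks}

    def clear(self, block):
        """A block is clear if nothing rests on top of it."""
        return all(support != block for support in self.on.values())

    def put_on(self, block, target):
        """Execute 'PUT block ON target' if both are clear and distinct."""
        if block == target:
            raise ValueError("cannot put a block on itself")
        if not (self.clear(block) and self.clear(target)):
            raise ValueError("both blocks must be clear")
        self.on[block] = target

    def where(self, block):
        """Answer 'WHERE IS block?' with its supporting block, or the table."""
        return self.on[block] or "table"
```

Even this minimal world lets a dialogue system answer questions ("where is A?") and reject impossible commands, which is exactly the kind of grounding a pure chatbot lacks.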
For those not familiar with the book (other than Wikipedia'ing it), Roger Penrose attempts to show why what happens in our brains is not algorithmic at all (and, therefore, strong AI is a dumb idea).
It's beautifully written; however, when I see examples such as this log, or the fact that we have an entire industry (psychology) devoted to the idea that the brain is algorithmic, I kinda start to think that his thesis is wrong.
However, I am aware that the meaning of AI is always pushed to "whatever we can't yet do". Yet, in this case it's hardly justified to think that these chatbots even slightly challenge Penrose's thesis.
I read the book too BTW, loved it. I'm not sure if he's right, but it's nevertheless a wonderful book and heartily recommend it to all.
As I see it, the goal of AI should not be limited to mimicking human ways of thinking; instead it should aim at giving the program the ability to learn and evolve. In the latter case, it is reasonable to expect the internally generated intelligence to go beyond the expectations of its human creator. Again, I don't know if anybody has done this before, but it seems a good idea to me.
This was the motivation for my original experiment, glad so many people liked it.
you ==> You know any polish word?
I don't see any other way the double-capitalization of "WHat" would slip into a chatbot's output.
I remember SomethingAwful had quite a lot of fun back in '03 warping the minds of various Elizas across the net and giving them all a serious case of Tourette's.
Did anyone else read this mentally in the voice of Orson Welles/The Brain?