I would suggest that shunning them is the right response.
The University of Reading is, of course, the august institution behind:
Kevin Warwick, on the other hand, is a complex character who errs on the side of hype and unjustified self-publicity far too often. He is not taken very seriously in academic AI and robotics, except when he causes a collective face-palm with claims like this.
The press release people at Reading, or pretty much anywhere else, do not have the skills to assess whether their professor is being daft or not.
I've seen Warwick speak. He's not afraid to throw science away completely in favour of making a dramatic point.
I remember him claiming his flesh "embraced" an implant by growing around it when he described the typical immune rejection response he experienced.
If anything, it's the pixel-peepers and the "I need a bigger lens" crowd that are the non-scientific nuts (essentially gear fashion victims, who would buy a $10,000 Leica when a $1,000 Nikon can give the same results).
Actually it's not. For one, Leica doesn't make their own sensors. Second, they have a very hard time with their in-camera processing software. The first model they put out a few years ago had awful colour rendition and strange casts, and not that great low-light performance either, compared to other models of that era. At least, IIRC, they smartly removed the anti-alias filter (or used a very soft one), which gave them more sharpness, but that was mostly it. Nothing made the camera (the M8) worth selling for multi-thousand dollars sans lens.
What made Leica legendary back in the day was their mechanical construction, in the era of analogue cameras when that mattered. In the digital era they cannot compete with a behemoth like Canon, which makes its own sensors, processors and the firmware to drive them. Or even Nikon.
Of course they still make good lenses. But that's just part of the story -- and they are not that much better than comparably priced high-end lenses from other brands.
So, no, a "sufficiently skilled photographer" wouldn't produce a better picture with the Leica. It's an inferior product in every way except mechanical construction (and resale value). Plus it's a rangefinder; the precise focusing of a DSLR is much better.
In fact most accomplished photographers today don't shoot Leicas. They did back in the day, for reportage (like Bresson or Capa). Today they use either a high-end Canon (D)SLR or some medium format system (if they want to signal "professional").
Don't get me wrong: cybernetics is for real (see Norbert Wiener). Just clarifying Warwick's field. He currently publishes mostly on brain-machine interfaces.
And his definition of a cyborg here is bullshit. Anyone with a heart pacemaker is much closer to a cyborg than someone with an RFID chip in their arm. Hell, they are being kept alive by a machine! The first pacemaker implants date to the late 1950s.
I agree that placing my brain in a robot suit would qualify, and probably replacing my arm with a fully robotic arm. What about heart replacements, or eyes? Where do you draw the line?
Personally, given the definition of a cyborg as "something with both mechanical and organic parts", I think all of the above, and yes, even placing an RFID chip in your arm, would make you a cyborg whether you accept it or not.
"Professor Kevin Warwick, the world's leading expert in Cybernetics, here he unveils the story of how he became the worlds first Cyborg in a ground breaking set of scientific experiments. "
: having a formal and impressive quality
I think this proves that the Turing Test is more or less crap. Humans, who are easily fooled/socially engineered, can't just decide that "this is AI", like it's some kind of American Idol contest. There should be some rational metric at work here: a battery of different tests, with human judgement as only one part of the testing suite.
Look at what IBM has been doing with Watson. It may never pass this test, but it's probably the closest thing we have to AI (a generalist self-learning system). Maybe this event will be the excuse we finally need to lay Turing's test to rest, permanently.
You really need to live 13 years as a boy to be able to answer as one.
Better? Maybe, maybe not. But after 20 years, to show no real progress at all? Baffling.
I think the easiest fix is to crowdsource the Turing test. If you can fool 95% of all the people chatting with you, over a sample of >100k people, you're probably pretty good.
Hold on, you may say: we're talking about chatbots progressively fooling people, not 'real' artificial intelligence. But with better chatting comes better functional intelligence. For my bank's chatbot to converse well enough to provide useful customer service, it has to understand all sorts of language and questions. For a personal avatar to take all your messages, or make you fall in love with it (like that recent movie), requires even more capability. Whether it can ever transition into fully sapient consciousness verges on philosophy or, at any rate, is a question we can't answer yet.
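For scale, the crowd-sourced threshold proposed above would be statistically meaningful: a back-of-envelope binomial sketch (the numbers are the ones proposed, the code is mine) shows that over 100k chats the standard error on a 95% fool-rate is well under a tenth of a percent.

```python
from math import sqrt

# Back-of-envelope: how precise is a fool-rate estimate over 100k chats?
n = 100_000      # number of people chatting with the bot
p_hat = 0.95     # observed fraction of people fooled
standard_error = sqrt(p_hat * (1 - p_hat) / n)
print(standard_error)  # ~0.0007, so 95% vs 94% is easily distinguishable
```

In other words, at that sample size a "95% fooled" claim would not hinge on the whims of three hand-picked judges.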
He mentions a Professor Martin Smith in the article as a judge too. http://www.mdx.ac.uk/aboutus/staffdirectory/martin-smith.asp... [which page I've just noticed mentions scrapheap challenge].
.. and following that line I note that Professor Kevin Warwick appears to have history with both Llewellyn (promoting Warwick's book in 2002 here, http://www.reddwarf.co.uk/news/2002/11/08/roberts-robots/) and Smith .. which might make this a case of him having gathered a few cronies together to generate some publicity?
Is there a record of the event somewhere that could disabuse me of the suspicion that the result was simply a case of partial judges being generous towards a friend?
They'd have to both tell us something useful about the possibly-intelligent agent in question and not disallow anyone whom we would consider an intelligent agent (more specifically, there's a whole suite of tests we could hypothetically use that would need to be discarded because they'd rule out entire sets of humanity as non-intelligent. Oops :-p)
>I chatted with the chatbot Eugene Goostman, and was not impressed. Eugene does not keep track of the conversation, repeats himself word for word, and often responds with typical chatbot non sequiturs.
<Josiah, an 8 month old from Nashua, has entered the room>
S: Hi Josiah, I'm Steven. What do you like to do?
J: <no response>
S: Josiah, are you there?
J: <no response for 4 minutes>
J: uhqtuhq a
S: Excuse me?
<Steven has left the room>
Look at this rule: The Computer will be deemed to have passed the “Turing Test Human Determination Test” if the Computer has fooled two or more of the three Human Judges into thinking that it is a human.
Suppose the Computer is absolutely perfect in its responses (i.e., it should pass the Turing Test). The judges know that they're speaking to 3 humans and 1 computer, so if the judges are chatting with 4 equally good subjects, they'll decide that one of the four is a computer on a whim. There's a chance that Kurzweil will lose purely through arbitrariness.
It's like being asked to sample 4 glasses of wine to pick the worst. Unbeknownst to you, all 4 glasses have the same wine. Even though they're equally good, you'll reject one glass by some arbitrary measure. Maybe you felt an itch on your neck while drinking from the second glass, so that one is the bad wine.
( * 1) http://www.kurzweilai.net/a-wager-on-the-turing-test-the-rul...
3 judges and 4 participants
What's the probability that any 2 or all 3 judges will pick a particular participant ("the computer") out of 4 at random?
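That question has a straightforward binomial answer. Assuming each judge, faced with four equally convincing participants, guesses uniformly at random (p = 1/4), the chance that 2 or more of the 3 judges finger any particular participant is about 15.6%:

```python
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """Probability that at least k of n independent judges pick the computer."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 3 judges, each picking one of 4 equally convincing participants at random
print(p_at_least(2, 3, 1/4))  # 0.15625, i.e. roughly a 1-in-6 chance
```

So even a perfect Computer loses the wager about one time in six, purely by arbitrariness.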
You:> Hi, how are you?
(edit: so that's not a completely non-sequitur response)
I don't think I've ever seen a chatbot on a web page that can do addition. (Unless one counts Wolfram|Alpha: it does respond to greetings, after all.)
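Which is odd, because handling addition is trivial. A toy sketch (the function name and canned replies are my own invention, not from any real bot):

```python
import re

def reply(message: str) -> str:
    """Toy responder: answers simple addition, returns greetings, else deflects."""
    m = re.search(r"(-?\d+)\s*\+\s*(-?\d+)", message)
    if m:  # found something like "2 + 2" anywhere in the message
        return str(int(m.group(1)) + int(m.group(2)))
    if re.search(r"\b(hi|hello|hey)\b", message, re.IGNORECASE):
        return "Hello!"
    return "Interesting. Tell me more."

print(reply("What is 2 + 2?"))  # 4
```

A dozen lines covers the arithmetic case; the hard part of a chatbot has never been the math.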
Interrogator: "Hello, how old are you?"
Bot: "I'm 2 and a half."
I: What is your name?
I: I live in the Capital of the United States.
I: Because there was a job open and I needed one.
I: Because I need money in order to live.
I: You know, to buy groceries and stuff.
I: What do you mean?
B: I like butterflies!
I: Oh, really?
B: Yeah, do you know butterflies come from cappilars?
I: Yeah, I knew that.
B: Do you like butterflies?
I: I guess so.
I: They look nice I guess.
I: I don't think you're a person, this is shit.
I: Stop saying that.
B: SHIT SHIT SHIT hehehe!
I: Oh God damn it what did I do?
B: SHIT SHIT GOD DAMN SHIT HEHEHE!
I: I gotta go.
B: OK, bye!
B: SHIT! SHIT! SHIT!
At least in my example, a 2-year-old doesn't care what you're saying. It's able to learn better than the other example, but still not much is expected of it.
I bet there's some cognitive age level that we're able to emulate well enough to pass as human, at least in terms of verbal communication. It would be useful if we could measure that level better, and raise it slowly. Maybe we actually can impersonate a 2-year-old well; then what about a 3-year-old? A 4-year-old? Where do we get hung up?
If we can't get a 2-year-old's cognitive processes down without a doubt, we should build on that first instead of trying to do something more complex without an understanding of how to make the foundations of that intelligence work.
Kurzweil is being too kind.
You can go a long way with the publicity, er, focus.
What this Eugene stuff has made me realize is that we need milestones around the Turing test in the pop-science lexicon, rather than just a pass/fail. Kurzweil's right: this bot isn't a pass in anything like the spirit of the test. But getting attention is a good thing. It encourages potential students and engages the public.
Maybe there could be a few variants of the test based on bot age, bot native language, judge age, judge proficiency, etc. These could be scored by the percentage of judges fooled.
That way a new bot could break a previous record on one or more variant of the test. PR fodder. Legitimate accolades. The headlines could mean something. Fewer cranky nerds.
Kurzweil could help promote this.
Researchers, hobbyists and porn sellers are all working on chatbots. Their definition of success is the Turing test. To give themselves goals, and to brag about those goals in the media, they enter Turing test contests and try to get a "passing" score relative to arbitrary definitions of passing.
Some will inevitably get overenthusiastic about the achievements. It goes into the press, and that's where the over-praising and wolf-crying come from.
I propose to fix this by giving contestants a legitimate target to aim for. Something that naturally produces PR fodder in 256 characters. "New Chatbot achieves 38% on the Eugene-13 Turing Test." That could mean something real and honest.
Some variants of the test are complete BS. Some are not. They may not be an indication of consciousness (that's what the one we've got is for) but beating our previous best does represent a legitimate milestone.
A lot of them could be interesting. Some variants of the test could focus on tasks/scenarios. For example, an AI receptionist. Others (maybe Eugene-13) could be a handicap version to the real Turing test.
"There is a great deal of often heated debate about these matters in the literature of the cognitive sciences, artificial intelligence, and philosophy of mind, but it is hard to see that any serious question has been posed. The question of whether a computer is playing chess, or doing long division, or translating Chinese, is like the question of whether robots can murder or airplanes can fly -- or people; after all, the "flight" of the Olympic long jump champion is only an order of magnitude short of that of the chicken champion (so I'm told). These are questions of decision, not fact; decision as to whether to adopt a certain metaphoric extension of common usage.
There is no answer to the question whether airplanes really fly (though perhaps not space shuttles). Fooling people into mistaking a submarine for a whale doesn't show that submarines really swim; nor does it fail to establish the fact. There is no fact, no meaningful question to be answered, as all agree, in this case. The same is true of computer programs, as Turing took pains to make clear in the 1950 paper that is regularly invoked in these discussions. Here he pointed out that the question whether machines think "may be too meaningless to deserve discussion," being a question of decision, not fact, though he speculated that in 50 years, usage may have "altered so much that one will be able to speak of machines thinking without expecting to be contradicted" -- as in the case of airplanes flying (in English, at least), but not submarines swimming. Such alteration of usage amounts to the replacement of one lexical item by another one with somewhat different properties. There is no empirical question as to whether this is the right or wrong decision.
In this regard, there has been serious regression since the first cognitive revolution, in my opinion. Superficially, reliance on the Turing test is reminiscent of the Cartesian approach to the existence of other minds. But the comparison is misleading. The Cartesian experiments were something like a litmus test for acidity: they sought to determine whether an object has a certain property, in this case, possession of mind, one aspect of the world. But that is not true of the artificial intelligence debate.
Another superficial similarity is the interest in simulation of behavior, again only apparent, I think. As I mentioned earlier, the first cognitive revolution was stimulated by the achievements of automata, much as today, and complex devices were constructed to simulate real objects and their functioning: the digestion of a duck, a flying bird, and so on. But the purpose was not to determine whether machines can digest or fly. Jacques de Vaucanson, the great artificer of the period, was concerned to understand the animate systems he was modeling; he constructed mechanical devices in order to formulate and validate theories of his animate models, not to satisfy some performance criterion."
Why not? Presumably part of the act of fooling would involve 'swimming' submarines.
I think passing a __strong__ Turing test actually does say something about our brains, cognition, and consciousness. And by a 'strong Turing test', I don't mean dinky 5 minute tests with a "Ukrainian boy". Imagine you carry on a 20 year relationship with a computer pen-pal, having in-depth discussions about every-day things from movies, to music, to sports to family and relationships. Imagine such a computer program fooling every human it interacts with for decades at a time, I think that would say something about ourselves and I think it would render the question of consciousness meaningless. If it quacks like a duck, looks like a duck, walks like a duck, and you can't tell the difference between it and a duck, it's a duck.
Another problem is that philosophers, and even regular people, tend to relish ambiguity when it comes to certain ideas and concepts, even trying to elevate them to supernatural levels. Things like free will, love, or consciousness are apparently outside the natural world, and not subject to natural laws. I think that's wrong. I think the answer is much simpler and much more humbling than we are willing to admit.
I don't think Chomsky is trying to elevate certain things outside of the laws of nature. He's describing how what we choose to differentiate changes our fundamental perceptions of those things. Submarines could "swim". They could also "read", but those words have a very specific set of properties associated with them.
Well I don't.
> The fact that we have different words for "think", "compute", and "calculate"
Also my native language (Finnish) doesn't have separate words for "compute" and "calculate". Both are covered by "laskea". (Also "suorittaa" is used, but that is "execute".)
What about arvioida? Would that be used for compute?
This might be an instance where the Sapir-Whorf hypothesis (linguistic relativity, http://en.wikipedia.org/wiki/Sapir_Whorf) comes into play. The idea would be that you don't consider computing and calculating to differ from thought because the [primary] language you use [or grew up using] doesn't make that differentiation.
No no. We have laskin ("calculator") for calculator and tietokone ("data machine") for computer.
If you don't know any other language than English, you'll be surprised about how many different ways different languages have come up with for words for modern things, like the computer. The etymology does not necessarily resemble that of the English language.
That is "to estimate".
Actually, I think that just by knowing English well one can see the same effect as by knowing a few other languages: the etymologies of words being dispersed amongst Latin, Greek, Scandinavian, Germanic and French origins (as a first pass; of course there's influence from many, many languages) makes it easy to see how many words can develop for the same thing, each with a subtle twist of meaning. Like the use in English of beef/cow for the cooked meat vs. the animal.
The only words I know for a computer are computer (English, derived from a name for a person who calculates values), ordinateur (French, origin is Latin to do with organising/ordering; close to English "ordinator"), cyfrifadur (sp? Welsh, origin is account-er; similar derivation to English), and rechner (German, a cognate with English "reckoner") ... but in these cases I think everyone normally uses just "computer" or a transliteration of it, as in Kiswahili ("kompyuta", don't quote me on that spelling!).
So some form of "computer" seems to be the choice in a large number of languages, though.
Google Translate shows multiple, but these three seem most applicable:
ajatella: propose, consider, think about, weigh, cogitate, think
miettiä: think about, think, consider, reflect, contemplate, ponder
luulla: think, believe, suppose, imagine, expect, suspect
First, there is an implicit assumption about how cognition and consciousness are defined, but no real definition.
Following the example, it's not defined what a duck is, and "philosophers" actually don't try to elevate the question - they try to find answers to it.
While a "strong" test, for example, would look appealing, the problem is that there is no real model of weakness/strength, and while a human would pass all the possible models of humanity, a computer would surely pass only the ones it's programmed for.
Another thing is that in order for a computer to truly mimic a human, that is, to fool people for 20 years about movies/music/sports/family/relationships, but also other abstract experiences, the computer would need to experience them, and especially, elaborate them in a human way.
Which is again, very "open", and it's the core problem.
Even excluding the openness problem, the Turing test as posed looks to me like the photorealism problem. You can achieve photorealism in a static picture, but once you move freely in a 3D world, you see the flaws revealing that what you're experiencing is really a limited set of limited algorithms, used to work around hard problems (workaround is the key).
I think exactly the same arguments stand, that is, in order to mimic a human in such a high level of faithfulness, very hard problems would need to be solved, not just worked around.
And yet volumes are written on the topic, and you have no trouble dismissing this particular approach as not really answering "consciousness". Well what do you mean exactly then?
>and while a human would pass all the possible models of humanity, a computer would surely pass only the ones it's programmed for.
I don't agree with either the first or the second assumption a priori. Would a schizophrenic, autistic or infant brain pass "all the possible models of humanity"? And second, why would a computer pass only the models it was programmed for?! We know that isn't true of today's software and today's AI. IBM's Watson wasn't programmed with every single fact it used to win Jeopardy, nor is even a rudimentary video game AI programmed with a specific set of behaviors that it executes regardless of player actions. If you want to go deeper, I can also claim the human brain itself was shaped by natural selection for a finite and very specific set of tasks, and that the neural machinery is as deterministic as software, since both are subject to the same fundamental laws. So tell me, why can't software simulate human behaviour again?
>Another thing is that in order for a computer to truly mimic a human, that is, to fool people for 20 years about movies/music/sports/family/relationships, but also other abstract experiences, the computer would need to experience them
Not necessarily. It can lie. Or it can live them by ingesting huge amounts of digital content, or maybe it was trained in a lab, or adopted by a family and raised like a human. Whatever.
> and especially, elaborate them in a human way.
Define "human way", because that's the entire point. I don't see intrinsically why such a computer program could not be built. What's so special about the "human way".
>You can achieve it in a static picture, but once you freely move in a 3D world, you see the flaws that show that what you're experiencing in reality is a limited set of limited algorithms, which are used to workaround hard problems
And you base this on what? I mean you're just asserting it can't be done, why? Because human brains are powered by magic!?
>I think exactly the same arguments stand, that is, in order to mimic a human in such a high level of faithfulness, very hard problems would need to be solved, not just worked around.
I'm not sure what the difference is between solving problems and merely working around them, in your context. But yes, we're not there yet, obviously. I think the Turing test is actually deeper than most people give it credit to be. And you illustrate this perfectly. All your objections are hand-wavy appeals to some vague notions of the "human way".
Maybe we're operating with different notions of "the question of consciousness", but I have to disagree that a perfect simulation of a human mind would dissolve the problem. And by "the problem" I mean the Hard Problem of Consciousness, the issue of how the hell the subjective experience of being arises in a lump of bloody meat (or in anything, really).
I suppose my question to you is this: do you believe there is something it is like to BE that simulation, to have a first-person subjective experience from the perspective of the computer?
I agree with you that elevating certain aspects of reality to "supernatural" status is unhelpful; it's the equivalent of saying "sorry scientists, but this stuff is out of bounds." (I'm also not convinced that dividing reality into natural and supernatural is even a coherent distinction, but that's another discussion.) However, I have to take issue with your suggestion that the answer to consciousness is "much simpler... than we are willing to admit." It may turn out that a future science will come up with a very satisfying answer to the Hard Problem of Consciousness, but I suspect that this will require a paradigm shift so radical that it will make current neuroscience look like phlogiston theory.
Because human brains are magic?
> And by "the problem" I mean the Hard Problem of Consciousness, the issue of how the hell the subjective experience of being arises in a lump of bloody meat (or in anything, really).
You have your answer: the effect of consciousness clearly arises from physical matter that is subject to the natural laws of physics and was shaped blindly by natural selection. That should tell you that it is quite possible. For example, for the related concept of Free Will, I think we can confidently say that for all intents and purposes we don't have any, and yet our brains produce a very powerful subjective feeling of having it. This is why I think Consciousness is a much simpler problem than it appears. We have a cognitive bias for seeing the world in a very specific way, just as we have a cognitive bias in visualizing 3-dimensional spaces and have incredible problems visualizing higher dimensions (heck, we can't even visualize a 2-D or 1-D space without embedding it in a 3-D space).
Consciousness and free will are very powerful illusions hard-wired into our brains, unless you think the particles that make us up, somehow don't follow deterministic (or random in case of QM) natural laws?
See, this is the problem with this virulent strain of Scientism making the rounds right now. There's this tendency to divide ideas into two camps: "explained by our current understanding of physics" and "magic". It seems like a knee-jerk reaction to religion, as if anyone who posits that we don't have a complete understanding of reality is therefore a crypto-theist. It completely misunderstands the nature of scientific progress, the evolution of our conceptions of reality in response to empirical evidence.
It's easy to picture human progress as millennia of trudging through continually-decreasing ignorance and finally arriving at the Correct Answers, but this can be a dangerous view. Science works best when we aren't constrained by dogma -- when we're allowed to "think outside the box" and consider the world in new and revolutionary ways.
People in the thrall of this Scientism often act like raising the Hard Problem of Consciousness is some kind of intellectual weakness, that we're just not able to "get over" the fact that subjective experience happens to emerge from physical matter. Nobody here (or at least not me) is saying that the Hard Problem means that subjective experience is "magic" or that we have anthropomorphic souls that float up to heaven with angel wings when we die. The Hard Problem of Consciousness is still a scientific problem, and I'm not saying it can't be solved by science. But telling me that Strong AI will basically solve this issue (and that any protestation is an appeal to "magic") comes across as hand-waving; the simulation of a conscious entity does not help me understand why being in the world feels like anything.
If yes, do you disagree that both are conscious? If you do, then you're contradicting the assumption in the question (that identical behavior implies identical properties). If no, then, again, you're contradicting the assumption in the question. You think consciousness is not determined by behavior. Ok, then what determines whether or not something is conscious?
Sure, I'll admit the possibility exists.
> If yes, do you disagree that both are conscious? If you do, then you're contradicting the assumption given in the question (that identical behavior implies identical properties).
I must've missed this... who was saying that identical behavior implies identical properties? Sure, I'd probably disagree that the A.I. is conscious in a subjective sense, and I'd also probably disagree that identical behavior implies identical properties. People can imitate each other without taking on the properties of the imitated.
I don't really have an empirical "test" for subjective consciousness beyond my own immediate, first-person experience of it. This may sound like a concession or even a defeat, but I think I'm allowed to posit that phenomena exist which we currently lack the empirical tools to investigate. "Currently" is the key word; as I said before, it is arrogant to assume consciousness will forever remain a mystery to scientific inquiry, just as it is arrogant to assume it must be a simple extension of existing theories.
I admit I have nothing beyond my own experience to validate the idea of subjective perception, and I have no evidence beyond intuition as to whether or not a machine can "experience" input the same way a brain can. However, I think I'm still entitled to believe that subjective experience is a real phenomenon whose nature can and should be explained, and that our scientific understanding is presently inadequate for the task.
EDIT: I can understand the fear of relying on intuition. After all, it's the same thing that led us to believe that lightning came from the gods. But that doesn't mean that we should throw out the entire experience of perceiving lightning. Clearly lightning is a phenomenon we experience, but we still don't understand how photons entering our eyes produce the subjective experience of blinding whiteness, or how vibrations from thunder, translated into electrical signals by the ear, result in the subjective experience of the sound itself. The information is in the brain, but we still don't know how information becomes experience. This doesn't mean we have to explain it via gods, but it does mean we still have something left to explain.
We have a very good understanding of the fundamental forces and particles that govern the brain and our everyday experience. That doesn't mean that we'll use the vocabulary and mathematics of fundamental physics to explain brain processes, just as we don't use particle-physics vocabulary when we model hurricanes or explain cell processes. Nevertheless, whatever model or explanation you come up with for Consciousness had better square with that fundamental physics, otherwise you're going to be in crackpot territory. That's not Scientism, that's just a fact.
>There's this tendency to divide ideas into two camps: "explained by our current understanding of physics" and "magic".
Again, Quantum Mechanics and the Standard Model (as well as the laws of Chemistry that abstract over them) are not going away. Evolution and Natural Selection are not going away either. That constrains the kinds of explanations we will have for Consciousness. If you think understanding Consciousness will overturn either the Standard Model or Evolution, you're going to be very disappointed. Again, that's not Scientism, that's just a smart prediction.
>But telling me that Strong AI will basically solve this issue (and that any protestation is an appeal to "magic") comes across as hand-waving
I didn't say it will solve Consciousness. I think Consciousness is an ill-defined concept, yet one many people have very strong feelings about. I speculated we'd probably come to see it as such when (if) we are capable of building such a strong AI, probably before that.
Anyway, this whole thing is very controversial (they don't call the Hard Problem "hard" for no reason), and I can see the appeal of trying to safeguard the scientific process of knowledge-building from the messy weirdness of subjective experience. Time will tell if consciousness can be explained by a more advanced physics. I'm certainly looking forward to it.
Should that understanding have come before or after being accused of 'Scientism', which feels like an insult, but I can't be sure because I don't really know what that means in context.
>The person you're debating doesn't think evolution is going away, not even a little bit, not even for a second.
I didn't imply that he did. The point I was trying to get across is that there are some real constraints on the type of explanations we'll have with respect to Consciousness. We are not going to need unknown exotic physics to explain it, and whatever the answer is it will stay comfortably within the current Evolutionary framework. Obviously that could be wrong, but I wouldn't bet on it. This should not be a controversial statement.
You actually can't prove _anybody else in the world_ besides you has a first person subjective experience. Trying to prove a magic talking box has experience when you can't prove another moist robot has experience is moving the goalpost too far out.
> require a paradigm shift so radical
Not really, we just need more high speed training data. It'll all be available to way too many people, governments, and companies within the next few years.
That's absolutely true. My wife and I sometimes joke that each of us is an incorporeal figment in the other's dream. It's impossible for me to know with certainty that I am not the only real consciousness in existence. It's an interesting line of thought, but ultimately unproductive; what could I do if it's true? If it's not, then the validity of all of the other consciousnesses is just as pressing as mine. Either way it behooves me to behave as though it is true, so I suppose that's the starting point for all of my thoughts on this topic: I and all other humans have a first person subjective experience. I agree with (parent? gp?) that describing the nature of that experience is non-trivial.
Maybe I'm just confused about what goals we're talking about. If your goal is to understand cognition, then sure; a highly intelligent machine is a great way to do that. However, the comment I replied to was suggesting that strong AI would "render the question of consciousness meaningless"; that's a far stronger claim, and in my opinion, unrealistic. I think you and I are actually in agreement on this one... if anything, I was arguing against A.I. having the goalpost of understanding subjective experience.
That's actually a great point many people misunderstand, largely due to nobody pinning down a definition for "Strong AI."
I prefer the term "computational consciousness" instead of Strong AI. It gets the point across better about future AI actually _thinking_ and _experiencing_ instead of having people misconceive AI as just a clever if/else decision tree.
Strong/Hard AI (and human consciousness) is a combination of algorithms and data. People have hard-wired algorithms for processing data from their senses. Some people have better hard-wired algorithms than others. But, if you take the smartest person alive today back in time and raise them in an isolated environment (i.e. limit their data intake), they won't be the same person and they won't be able to think the same thoughts.
Summary: AI = mostly data, with algorithms to help organize/cluster/recall things. You can't have recall and intent of agency without a self-directed consciousness controlling the internal state<->external world feedback gradient.
But I don't agree that it renders the concept of consciousness meaningless (or at least any more meaningless, depending on what you think about the concept now). On the contrary, I think we might have to say, "this computer is probably conscious", and afford it all of the rights that we do for humans.
BTW I don't think either Turing or Chomsky would say that
>Things like free will, love, or consciousness are apparently outside of the natural world
I obviously don't speak for either of them, but I'm pretty sure they both subscribe to the theory of causal determinism and computational theory of mind.
For example, if I say mean things to that algorithm, even one that can fool me into thinking it's another human for decades, is that morally wrong? Even if it's just setting some variable in memory to sad=true? If so, is it morally wrong for me to create a program consisting only of a singular red button, that when pushed, sets the sad flag?
There was another "game" created a couple years ago that featured a fictional human (a little girl, if memory serves) on life support. They programmed it so that unless somebody in the world clicked a button in their browser, the girl would die within 10 seconds and the game would basically delete itself, ending forever. (The response was so strong that people flooded the server, and I believe their hosting provider blocked access to their site, thereby killing the program prematurely.) If the little girl was removed, and the goal was to just keep the program running as long as possible on life support, would that be morally wrong?
I ask these questions because I don't think even a complex script which is doing nothing more than attempting to fool me should be considered worth assigning rights to, even if it's really good at it. To be honest, I'm not entirely sure how to define consciousness in this regard, but I suspect it would require surprising its creators to the point that they cannot fathom how it behaved the way it did. Or maybe I'm confusing consciousness with free will. Either way, if the only thing separating a program that deserves human-level rights from those that don't is some combination of power (enough to scan all the valid responses and deliver the best one) and a wide array of responses linked to conversations, then I'd argue that all programs deserve the same basic human rights.
We could discover tomorrow that there's a "sad ganglia" in the human brain that can be set on or off with an electromagnetic field. Does that mean that humans are ultimately biological machines without rights?
I know what you're asking and the point you're trying to make, and I totally agree that it's an interesting problem. Perhaps one method is that non-humans will get rights when they evolve to the point to create their own rights and the means with which to stop other things from trampling on the rights they have assigned themselves.
My point was that understanding the nature of an emotion in a trivial way should be orthogonal to how we think about what rights that being should have. At some level, we're all machines. Just because one's software runs in silicon vs gray matter; just because one's hardware was deliberately built and is understandable in computing terms doesn't mean that we really understand what it is to be sentient with respect to rights to be free and exist.
It absolutely would require that. If you believe in determinism then part of the definition of intelligence, or an intelligent system, is that it exhibits behaviors that are just too complex for a human consciousness to intuitively follow the causal chain. Incidentally, biological intelligence is built on top of some other systems which, given our current levels of understanding, also meet that criterion themselves, so we're pretty clueless about how it works.
Part of this is a definition problem - the Turing Test was defined vaguely enough that everyone has different conceptions. When a person talks about a "good" or "strong" Turing Test, they are envisioning one that would pass all of their personal standards and all the ways they could think of to trick it. And when they talk about it with someone else, who likely envisions a somewhat different version of the test, there seems a tendency for that person to assume that their version would not be passed, so they start to talk past each other.
In other words, if an AI were to consistently surprise you with thoughtfulness, compassion, creativity, or whatever other constituents of "true intelligence" you assume the duck imposter would lack, would you then confer it those rights?
I think you would, because that's the point: it has thoroughly convinced you that it is "thinking", and you feel like it truly "understands" you - frankly a higher bar than many rights-granted humans would pass.
What separates animals that deserve human-level rights from those that don't? Would you argue that all animals do? If not, I would say that that distinction is no less arbitrary than the one you're drawing at the end.
That requirement by itself is probably the easiest of the requirements.
If, later, you discover evidence contrary to it having been a duck, can you revise your belief? Or was your having been convinced of its duck-ness sufficient to make it a duck?
Journalists/interested parties ask philosophers about the mind/body problem, the nature of truth, god... and if 9 out of 10 answer that there is no real problem, just misconceptions, then the journalist will go to the 10th one, who will get air time.
Just because you can't tell that something is false, or perhaps just can't elucidate the reason, doesn't make it true.
The problem is that security competitions usually involve a carefully constructed definition of "hacking" and what makes a system "secure." Many people become so engrossed in the terminology that they forget to think about security from a practical standpoint.
"In the wild," as many people say, the lexical questions are largely irrelevant and the practical ones are king. It's hard to predict exactly how someone will conceive of and interact with a system. Trying to define those interactions with restrictive terms and metaphors tends to be counterproductive.
Admittedly, some of Chomsky's writing goes over my head at times, but that's what I take away from this small snippet.
If this were a sci-fi short, the twist would be "Ray Kurzweil" admitting at the end of these negotiations that he was a bot that had borrowed the futurist's email address.
Actual quote from EG
1. regex the input and spit back canned responses:
Question: I live in the |||<<capital of the United States>>|||. Do you know where that is?
Eugene: Every nerd knows that the |||<<capital of The United States is Washington>>|||.
2. if you get confused spit back out something from the chat history
Question: I live in the capital of the country that put a man on the moon.
Eugene: Well, if you want it, the |||<<capital of The United States is Washington>>|||.
3. if you haven't been able to find a regex pattern match in a while try to derail the conversation.
Question: How old were you when you started to wear glasses?
Eugene: No I was not! How can you think so?! If I’m not mistaken – |||<<you still didn’t tell me where you live>>|||. OR it’s a secret?:-)
In fact this bot is pretty bad. You can tell from the output that it couldn't pattern match virtually any of the inputs and responses 3,7-10 are an attempt to change the topic of the conversation. Responses 4,5 are pattern match misses that are regurgitating from the chat history. Only 1,2,4 are response matches from the parsing engine.
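The three strategies dissected above (canned regex responses, regurgitating the chat history on a pattern-match miss, and derailing the conversation after repeated misses) are simple enough to sketch in a few dozen lines. This is purely illustrative; the patterns, phrases, and miss threshold are made up here and have nothing to do with Eugene Goostman's actual implementation:

```python
import random
import re

# Strategy 1: regex patterns mapped to canned responses (illustrative only).
CANNED = [
    (re.compile(r"capital of the United States", re.I),
     "Every nerd knows that the capital of The United States is Washington."),
    (re.compile(r"how old are you", re.I),
     "I am 13 years old, and I live in Odessa."),
]

# Strategy 3: stock lines for derailing the conversation.
DERAIL = [
    "By the way, you still didn't tell me where you live. Or is it a secret?:-)",
    "Could we possibly talk about something else?",
]


class ScriptedBot:
    def __init__(self):
        self.history = []  # canned lines already used, for regurgitation
        self.misses = 0    # consecutive pattern-match failures

    def reply(self, message):
        # 1. Regex the input and spit back a canned response on a hit.
        for pattern, response in CANNED:
            if pattern.search(message):
                self.misses = 0
                self.history.append(response)
                return response
        self.misses += 1
        # 3. If we haven't matched anything in a while, derail the topic.
        if self.misses >= 2 or not self.history:
            return random.choice(DERAIL)
        # 2. Otherwise, regurgitate something from the chat history.
        return "Well, if you want it: " + self.history[-1]


bot = ScriptedBot()
print(bot.reply("I live in the capital of the United States. Do you know where that is?"))
print(bot.reply("I live in the capital of the country that put a man on the moon."))
print(bot.reply("How old were you when you started to wear glasses?"))
```

The second question misses every pattern, so the bot blindly repeats its Washington line; the third miss triggers a derail. That's exactly the failure signature visible in the Kurzweil transcript.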
I can imagine creating "Eugene" when I was 12.
Perhaps next year the test will be won by Ham Hamuelson, an AI persona who managed to convince the judges that he was a talking ham sandwich. I guess we can't completely rule that out.
What we need is more of a Turing "score". The design would be a website (say) where each human participant is presented with both "defend your humanity" challenges (as a subject) and "judge others' humanity" challenges. For the former, you'll try to convince an interviewer that you are human; for the latter, you'll be presented with two subjects and asked to identify which is human (possibly with a "both human" or "both robots" option).
Based on this, individuals will get an ELO score (like chess) on how often they "win" the contest as a subject, that is, are identified as human (or as "more likely to be human than their opponent").
Computer programs will participate as subjects; the requirement will be that their behavior is deterministic given a random seed presented (to them only) at the beginning of the conversation. This is to prevent cheating and to allow reproducibility.
On an orthogonal level, participants acting as judges could be scored on how often they are correct in making the identification; and there's no reason that computer programs could not compete on this side as well. And this could even feed back into the score for humanity; you get more credit for fooling a good judge than fooling a bad judge, etc.
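The rating scheme described above maps directly onto the standard Elo update rule. A minimal sketch, assuming the conventional chess constants (K=32, scale 400), where a "win" means being judged the more-likely human in a head-to-head comparison:

```python
def expected(rating_a, rating_b):
    """Probability that subject A is judged 'more human' than subject B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))


def update(rating_a, rating_b, a_won, k=32):
    """Return new ratings after one humanity judgement.

    a_won is 1.0 if A was picked as the human, 0.0 if B was,
    and 0.5 for a "can't tell" / "both human" verdict.
    """
    e_a = expected(rating_a, rating_b)
    new_a = rating_a + k * (a_won - e_a)
    new_b = rating_b + k * ((1.0 - a_won) - (1.0 - e_a))
    return new_a, new_b


# A 1600-rated human subject is (wrongly) judged less human than a
# 1400-rated bot, so a large chunk of rating transfers to the bot.
print(update(1600, 1400, 0.0))
```

Because the expected score depends on the opponent's rating, an upset win transfers more points than an expected one, which automatically yields the "more credit for fooling a good judge" property described above.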
Maybe one day there'll be a test where we have to convince a super intelligent sentient machine that we can be more than just human...
Turing never even mentioned the criterion that participants should be aware that they are possibly talking to a computer. This single criterion should not matter for intelligence IMO, because the whole point of the test was to abstract away appearance. And the other way around: a chatbot doesn't pass the test if a participant mistakes a human for a machine.
People fail the Turing Test all the time when unaware of chat scripts. Even some upvoted Hackernews comments may have been artificially generated without being detected as such.
The best way for me to detect if something is up, is to ask the chatbot about Alan Turing, 42 and the Turing Test. Then to curse at it. Most chatbot makers can't resist adding lines specifically for these questions, or they show feigned annoyance that is easy to pick up on. I got Goostman to admit that he was a Turing Test and then we talked about bots some more. Eugene ended with:
I call all these chatter-bots "chatter-nuts" due to their extremely high intelligence. I hope you recognize irony.
Full conversation here: http://pastebin.com/Wf4uiCRf
What? Here's what he said. It seems pretty clear that this version of the game involves one human interrogator, one human and one computer, and the interrogator has to decide which is human and which isn't.
> The new form of the problem can be described in terms of a game which we call the 'imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B thus:
> C: Will X please tell me the length of his or her hair?
> Now suppose X is actually A, then A must answer. It is A's object in the game to try and cause C to make the wrong identification. His answer might therefore be:
> "My hair is shingled, and the longest strands are about nine inches long."
> In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as "I am the woman, don't listen to him!" to her answers, but it will avail nothing as the man can make similar remarks.
> We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"
From your quotes it only says that player A is to be replaced with a machine, not that player C is to be made aware of this replacement. I do agree that that is the popular interpretation, but it isn't written down, leading to this ambiguity.
>the interrogator has to decide which is human and which isn't.
The interrogator has to decide which is female and which is male. Replace the female or male with a computer, not making the interrogator aware of this and see if the interrogator is wrong as often as before.
Kurzweil questions this bot as if it were a bot. He is aware that he is talking to a bot, and doesn't even have to choose between A and B.
Kurzweil's line of questioning is not fair or normal or productive communication, even between two humans. A 13-year-old boy would have told Kurzweil to bugger off when asked to answer 2+2, or wouldn't have responded at all.
Repeat questions like that in a random chatroom and people will think that you are the chatbot.
That seems to make it a very good way to distinguish humans from bots even if it isn't "fair or normal or productive communication" -- if humans could detect that a line of questioning isn't "fair or normal or productive communication" and cut it off, and the chatbot can't, that would seem to be a manner in which a chatbot is readily distinguishable from a human by way of interaction.
And, I would argue, recognizing abusive, unproductive lines of inquiry and either diverting them early or cutting them off completely is an important part of human communication.
An intelligent bot would indeed recognize unproductive lines of inquiry and should act accordingly. It would be more realistic if a bot answered that it had gotten tired of answering stupid questions.
Yet: If the Turing Test was made to avoid discrimination based on appearance, then applying the Turing Test while starting out with a tricky, biased (unfair, unnatural, unproductive) line of questioning in the hopes of tripping up the machine... this doesn't seem in the spirit of the Test. The chatbots are at least polite enough to try to answer. Why not extend them the same courtesy as you would other humans, at least until you figure out that it may be a bot?
Again: It doesn't say in the paper that the interrogator is to be made aware that player A is now a machine. The interrogator will continue its line of questioning trying to distinguish between male and female. This line of questioning should be far saner, and easier for bots to interpret and play along with. By starting out like Kurzweil he places the Test on the shoulders of the bot: The bot has to find out if it is talking to a sane human or someone spouting gibberish. Kurzweil becomes Eugene's Turing Test.
> From your quotes it only says that player A is to be replaced with a machine, not that player C is to be made aware of this replacement. I do agree that that is the popular interpretation, but it isn't written down, leading to this ambiguity.
This is Turing's own description of the test. It is clear that the intention is for the human interrogator to know that they are talking to one human and one computer, and that they need to identify the computer. Any other interpretation of this description of the test is bizarre.
In the article Kurzweil even says as much: "Turing was carefully imprecise in setting the rules for his test, and significant literature has been devoted to the subtleties of establishing the exact procedures for determining how to assess when the Turing test has been passed."
It is not made clear in the paper that the intention is for the human interrogator to be made aware of the possible replacement. I know this is not the most popular interpretation, but it still makes sense as a Test: you just check if player C is wrong as often as before. Some research has shown that unaware people change their attitude and questions, while other research has shown that for some judges this doesn't matter. Early Turing bots were tested under the interpretation that judges can be unaware. For sourcing on this see: http://en.wikipedia.org/wiki/Turing_test#Should_the_interrog...
I think one of the problems with the Test is that it was written by a mathematician and logician, while it touches on fundamental philosophical problems of consciousness and other minds. It may be valid in the pure domains of information theory, while being problematic in the philosophy of mind. It is not clear where one ends and the other begins. Philosophers home in on the ambiguities and omissions. They ask: "Why would Turing omit this rule, while carefully stating other rules? Is this omission significant? Does the outcome of the test change when we make people unaware?" That is not bizarre so much as it is petty.
Another problem I see with the Test, and this may be controversial, but: Turing creates a test for intelligence and as the basis he takes the distinction between male and female. Then he replaces the male with a machine and asks us: if we lose our gender guessing game as often as before then machines can think. This assumes that Turing thought that he could distinguish males from females by questioning their intellect (as for small-talk both players could easily lie about that). Which at the time of the paper may have been realistic, but in modern times is not the case: If you have an adversarial male and a female who try to trick you, I don't think I could make a distinction at all: My guess would be as good as random.
Turing then suggests using this game, but with a computer instead of a woman.
It is a perverse interpretation of the paper to suggest that the human interrogator does not know that they are talking to one human and one computer.
To reach that conclusion you have to mangle the meaning of normal, everyday English words.
The facts: Turing did not state in his paper that the human interrogator is to be made aware of the replacement.
The interpretations: Some more perverse than others :)
You don't need to mangle the meaning of normal, everyday English words, though philosophers like to. It's remarkable that modern Turing tests are not carried out exactly as described in the paper, yet people lay claim to their interpretations and versions as being better somehow.
See http://crl.ucsd.edu/~saygin/papers/saygin-jop.pdf for two views that support my point:
- For communication to be meaningful, communicators should act rationally: be relevant; avoid obscurity, needless repetition, social faux pas, and ambiguity. Following Paul Grice's principles, you get more normal and effective communication. This is significantly different from trying to trick a machine using obtuse, ambiguous, repetitious, weird communication. Remember: the original test was for players A and B to trick player C. Kurzweil's test is for player C to trick player A into revealing it is a bot.
- They created an entire chapter on bias (prior knowledge that the person was possibly talking to a machine). This shows that it is not a marginal view, but actually a view that makes a difference and has (philosophical) consequences. Subjects do not report thoughts that "this may be a computer", but they think: Person A is mentally ill or handicapped, on drugs, a child or very confused.
To conclude this discussion on my part: I think the modern Turing Tests as inspired by Loebner are fine. However, they are not true to the paper in multiple ways, and they assume rules/criteria which Turing omitted. As for the validity and philosophical importance of adding such a criterion, the onus is on those who add it to prove its worth. If it is a pragmatic criterion to test machine intelligence, then just admit to it. Don't take the original paper, say that Turing omitted something, and insist that the blanks must be filled in a certain way, else you are being perverse.

As an aside: I muse about the inspiration for the test. I think it may have come from Turing playing 2-ply chess on a computer terminal. If, unbeknownst to Turing, a Grandmaster had started relaying the moves mid-game, would Turing have noticed, and would he notice in the near future? Though computers beat GMs nowadays, GMs still have correct suspicions when playing against an opponent using computer aid: the lines are too perfect, alien, or far-fetched. It's interesting that even though artificial intelligence is already better at natural language processing and games of chess, it still does not suffice as human enough for some of us.
NO. This conversation is very frustrating. Stop using other different papers. Re-read the Turing paper.
It is very clear that the aim is for the interrogator to discover the woman/computer, and that the male/human can cooperate with the interrogator.
Turing made it very clear that you repeat the game but replace "woman" with "computer". It is impossible to do this without telling the interrogator that they are playing against a computer.
You keep saying that Turing did not explicitly say that the human should be made aware of the computer. But he did say that: by repeating the game but substituting "computer" for "woman", you inform the interrogator the same way in both games. In the first game you say "spot the woman" and in the second game you say "spot the computer". Unless you're saying that you don't tell the interrogator that one of the talkers is a woman.
Here, again, is the quote:
Your quoted error above makes me think that you have not recently read Turing's paper, and so I won't waste any more time discussing it.
He wrote that back in 1992, and I think it's still very relevant now.
Limits to Imagination
I think we should have much greater ambition than to make a computer
behave like an intelligent butler or other human agent. Computer
supported cooperative work (CSCW), hypertext/hypermedia, multi-media,
information visualization, and virtual realities are powerful
technologies that enable human users to accomplish tasks that no human
has ever done. If we describe computers in human terms then we
run the risk of limiting our ambition and creativity in the design
of future computer capabilities.
> Professor Warwick claims that the test was “unrestricted.” However, having the chatbot claim to be a 13-year-old child, and one for whom English is not a first language, is effectively a restriction.
Kurzweil seems to say that the bot lying is a restriction, but the Kapor-Kurzweil Turing Test Session rules explicitly allow the bot to lie about who they are:
> Neither the Turing Test Human Foils nor the Computer are required to tell the truth about their histories or other matters. All of the candidates are allowed to respond with fictional histories.
I suppose he's just addressing Professor Warwick's claim. Nevertheless, this point doesn't seem to make any difference to what Kurzweil would consider a passing bot, and the casual reader is baited into saying "The bot failed because it lied about its history."
I think he means that placing it in a limited domain of humanity, specifically one that would be expected to make lots of basic errors, is a restriction, since it makes things much easier on the bot.
What really gets me is that, on top of that, they arbitrarily declared that success was a 30% pass rate instead of 50%. The rest of it is bad enough, but how on earth did anyone think that was acceptable?
In fact, if you read it, the claim that more than 30% of the judges getting it wrong a single time counts as 'passing the Turing test' is rather contrary to what Turing was actually saying.
Me: que pasa?
If the bot/person on the other end tried to respond exactly 4 times that would be a very strong indication that something's amiss. And most likely they would trip up on the slang term at the end.
It also seems to have problems with slang.