I think we agree. And I think Turing's point in choosing a conversation as the test was to pick something that would draw on the vast range of human experience. If a computer could pass a strong version of the test as you describe, then I would agree that we would have to say, as you do, "if it looks like a duck, ... then it is a duck."
But I don't agree that it renders the concept of consciousness meaningless (or at least any more meaningless than it is now, depending on what you think about the concept). On the contrary, I think we might have to say, "this computer is probably conscious", and afford it all of the rights that we do for humans.
BTW I don't think either Turing or Chomsky would say that
>Things like free will, love, or consciousness are apparently outside of the natural world
I obviously don't speak for either of them, but I'm pretty sure they both subscribe to the theory of causal determinism and computational theory of mind.
I'm not so sure that just because you can fool me into thinking an object is a duck, it actually is a duck. At least not when applied to general AI. Nor do I necessarily agree that just because I can have a conversation with an algorithm, we should give that algorithm all the rights given to humans.
For example, if I say mean things to that algorithm, even one that can fool me into thinking it's another human for decades, is that morally wrong? Even if it's just setting some variable in memory to sad=true? If so, is it morally wrong for me to create a program consisting only of a singular red button, that when pushed, sets the sad flag?
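To make the triviality concrete, this is roughly all that hypothetical red-button program would amount to (a minimal Python sketch of my own, not anyone's actual system):

    # A hypothetical "sad flag" program: one button, one boolean.
    import tkinter as tk

    sad = False  # the entire "emotional state" of this program

    def push_button():
        global sad
        sad = True  # pressing the red button sets the flag; nothing else happens
        print("sad =", sad)

    root = tk.Tk()
    tk.Button(root, text="Red Button", bg="red", command=push_button).pack()
    root.mainloop()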
There was another "game" created a couple of years ago that featured a fictional human (a little girl, if memory serves) on life support. They programmed it so that unless somebody in the world clicked a button in their browser, the girl would die within 10 seconds and the game would basically delete itself, ending forever. (The response was so strong that people flooded the server, and I believe their hosting provider blocked access to their site, thereby killing the program prematurely.) If the little girl was removed, and the goal was just to keep the program running as long as possible on life support, would that be morally wrong?
I ask these questions because I don't think even a complex script which is doing nothing more than attempting to fool me should be considered worth assigning rights to, even if it's really good at it. To be honest, I'm not entirely sure how to define consciousness in this regard, but I suspect it would require surprising its creators to the point that they cannot fathom how it behaved the way it did. Or maybe I'm confusing consciousness with free will. Either way, if the only thing separating a program that deserves human-level rights from those that don't is some combination of power (enough to scan all the valid responses and deliver the best one) and a wide array of responses linked to conversations, then I'd argue that all programs deserve the same basic human rights.
is it morally wrong for me to create a program consisting only of a singular red button, that when pushed, sets the sad flag?
We could discover tomorrow that there's a "sad ganglia" in the human brain that can be set on or off with an electromagnetic field. Does that mean that humans are ultimately biological machines without rights?
I think it's extremely likely that humans are going to have a very hard time in the near future coming to terms with the fact that we are what is "in" our brains, and that our brains are basically just biological computers whose inputs and outputs, and the way one is converted into the other, are very much a part of the natural universe and can be explained. And once we can explain it, we'll be able to predict it and thereby control it.
I know what you're asking and the point you're trying to make, and I totally agree that it's an interesting problem. Perhaps one answer is that non-humans will get rights when they evolve to the point of creating their own rights, along with the means to stop other things from trampling on the rights they have assigned themselves.
I'm with you. I might have said it a little differently in that I believe that rights are societal conventions and written agreements that we give each other and protect for one another. Without conscious agreement to the system in which those rights exist, other bases need to be established.
Perhaps, but "feelings" are more complicated than that. Pain isn't just a little flag that gets set to TRUE. At the very least, it's rewiring your neurons to avoid that experience, initiating instincts, and affecting your entire brain.
Definitely, but then again any AI worth a damn isn't going to have a "sad flag" either.
My point was that understanding the nature of an emotion in a trivial way should be orthogonal to how we think about what rights that being should have. At some level, we're all machines. Just because one's software runs in silicon vs. gray matter, and just because one's hardware was deliberately built and is understandable in computing terms, doesn't mean that we really understand what it is to be sentient with respect to the rights to be free and to exist.
To be honest, I'm not entirely sure how to define consciousness in this regard, but I suspect it would require surprising its creators to the point that they cannot fathom how it behaved the way it did.
It absolutely would require that. If you believe in determinism then part of the definition of intelligence, or an intelligent system, is that it exhibits behaviors that are just too complex for a human consciousness to intuitively follow the causal chain. Incidentally, biological intelligence is built on top of some other systems which, given our current levels of understanding, also meet that criterion themselves, so we're pretty clueless about how it works.
Part of this is a definition problem - the Turing Test was defined vaguely enough that everyone has different conceptions of it. When a person talks about a "good" or "strong" Turing Test, they are envisioning one that would pass all of their personal standards and all the ways they could think of to trick it. And when they talk about it with someone else, who likely envisions a somewhat different version of the test, there seems to be a tendency for each to assume that the other's version would not be passed, so they start to talk past each other.
In other words, if an AI were to consistently surprise you with thoughtfulness, compassion, creativity, or whatever other constituents of "true intelligence" you assume the duck imposter would lack, would you then confer it those rights?
I think you would, because that's the point: it has thoroughly convinced you that it is "thinking", and you feel like it truly "understands" you - frankly a higher bar than many rights-granted humans would pass.
What separates animals that deserve human-level rights from those that don't? Would you argue that all animals do? If not, I would say that that distinction is no less arbitrary than the one you're drawing at the end.