Like a lot of articles on the topic, this one misrepresents (or misunderstands) the Turing Test. In a _true_ Turing Test, the humans aren't blindly conversing with the assumption that their conversant is human -- they're actively seeking to verify the presence of a human.
The non-Turing Test described in the article merely requires that contenders don't do anything blatantly inhuman, and is obviously trivial to "pass". Any contender for the actual Turing Test will do so with ease.
(I built a bot a few years ago that stored strings from IM conversations, and used the tokens of the preceding phrase as keywords for future lookups. It's hard to imagine a more naive algorithm, and yet, as in SlutBot's case, it didn't take long for most people to assume it was human.)
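For the curious, the approach was roughly this (an illustrative Python sketch, not the original code; the class and method names are made up):

    import random
    from collections import defaultdict

    class NaiveChatBot:
        """Index observed replies by the tokens of the phrase that preceded them."""

        def __init__(self):
            # token -> replies that were seen following a phrase containing that token
            self.index = defaultdict(list)

        def observe(self, previous_phrase, reply):
            # Store the reply under every token of the phrase it answered.
            for token in previous_phrase.lower().split():
                self.index[token].append(reply)

        def respond(self, phrase):
            # Gather every stored reply sharing a token with the incoming phrase
            # and pick one at random; fall back to a canned line otherwise.
            candidates = []
            for token in phrase.lower().split():
                candidates.extend(self.index.get(token, []))
            return random.choice(candidates) if candidates else "lol what do you mean"

Feed observe() each (previous message, reply) pair from logged conversations, then call respond() on live messages; keyword overlap alone is enough to produce replies that feel vaguely on-topic.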
Also, with regard to SlutBot's use... is it still entrapment if the bot does it?
The fact that the chat is automated has nothing to do with it; of course it is still entrapment, just like you can't shoot somebody and claim the gun did it.
However, if two separate entities were involved you could get around it. For example, if spammers ran the bot for their own purposes and the police happened to be monitoring the chat room but had nothing to do with launching/controlling the bot, it wouldn't be entrapment.
"The fact that the chat is automated has nothing to do with it, of course it is still entrapment, just like you can't shoot somebody and claim the gun did it."
I don't follow your gun analogy.
IANAL, but my understanding of the case law surrounding entrapment (in the US at least) is that the entrapped party has to have been actively induced to commit a crime. And so agreeing to sell crack when approached is not entrapment (no one was explicitly encouraged to do so), whereas repeatedly soliciting crack is.
I'm not sure where SlutBot would fall along this continuum, but it seems that there's at least a case that its use would be legally viable.
The analogy was meant to illustrate that the mere fact of using an automated tool to perform an action doesn't change the legal liability of the person performing the action.
Using a SlutBot with the intent (intent is a critical element of entrapment and most crimes) to entrap someone is no different than chatting with the person personally, typing those chat lines manually. Using a tool changes nothing.
"Using a SlutBut with the intent (intent is a critical element of entrapment and most crimes) to entrap someone is no different than chatting with the person personally"
In the first case, the government assisted in a crime where the intent to break the law was clearly already present. And the judgement in the second case again makes clear that entrapment relies on there being no prior evidence of criminal intent: "If the result of the governmental activity is to 'implant in the mind of an innocent person the disposition to commit the alleged offense and induce its commission . . .,' Sorrells, supra, at 442, the defendant is protected by the defense of entrapment".
It would presumably come down to whether SlutBot offered illegal content, or whether that content was requested of it.