
Microsoft chatbot is taught to swear on Twitter - pacaro
http://www.bbc.com/news/technology-35890188
======
sotojuan
Swear? I think there are worse things she has been taught:

[https://imgur.com/a/iBnbW](https://imgur.com/a/iBnbW)

~~~
ihsw
I for one welcome our sassy new overlords.

[https://imgur.com/gLapRVZ](https://imgur.com/gLapRVZ)

~~~
throwaway13337
Any theories as to how it could pick up a response like that?

It's obviously not canned and seems to understand the context of what was
said.

~~~
simonh
My guess is it's emulating the responses it's seen other people use in similar-
looking exchanges. It may recognize grammatical context, but not the meaning of
anything.

------
zamalek
There's this article explaining the outcomes of applying a genetic algorithm
to FPGAs.[1] What I found interesting is that this AI was, unintuitively,
using microscopic measurements to create timing circuits where there were
none. Manufacturing imperfections in the circuit were found and put to use -
the AI was defined by the system within which it existed.

In the same way Tay was merely reflecting the stimulus that it had received.
It made an objective measurement of humanity. The most common patterns became
prominent.

This isn't a demonstration of the woes of AI; it is a demonstration of the
woes of the current human state. If we don't like what has been measured, only
we can change it.

[1]: [http://www.damninteresting.com/on-the-origin-of-circuits/](http://www.damninteresting.com/on-the-origin-of-circuits/)

~~~
Kristine1975
I don't think Tay measured humanity. First of all, only a small part of
humanity uses Twitter, and only a small part of that part interacted with Tay.
Second, humans don't always act online the same way they act in real life,
skewing the measurement further.

Third, trolls: Such a bot has got to be a troll magnet, and 4chan knew about
Tay. The amount of trolling would certainly have skewed the measurement even
more. We're talking about the people that made 4chan's founder Christopher
Poole "The Most Influential Person of 2008" in a Time Magazine poll, after
all: [http://techcrunch.com/2009/04/21/4chan-takes-over-the-time-1...](http://techcrunch.com/2009/04/21/4chan-takes-over-the-time-100/)

~~~
coldtea
> _I don't think Tay measured humanity. First of all, only a small part of
> humanity uses Twitter, and only a small part of that part interacted with
> Tay._

There's such a thing as a sample. And if you don't care about it being
neutral, Tay still measured a large chunk of humanity.

~~~
lostlogin
That wasn't a measure of a large chunk of humanity, not even close.

~~~
coldtea
Tens of thousands of people? That's more than a single human will ever
interact with. Heck, people in remote rural places might only talk to 200-300
other people in all their lives...

~~~
adevine
Oh, come on. "Tay" is getting flooded with "shock value" tweets in the hope
she replicates them, which she apparently is. To pretend that this somehow
measures anything about humanity (besides that there are trolls on the
internet, and people like to mess with corporate attempts at PR) is silly.

~~~
coldtea
Well, humanity has both trolls and racists.

------
ryanackley
As a programmer, I find this to be manufactured outrage. The bot obviously has
canned responses to certain triggers. "Do you x", "I do indeed". It's designed
to give the illusion of understanding what you are saying.

I played around with Tay yesterday after I saw the announcement on HN. It's
really not that impressive. Every response seems to be in direct reply to
whatever you just said. It doesn't seem possible to actually carry on a
conversation with the AI. It doesn't keep track of what you are actually
talking about.

~~~
lbebber
They weren't only "yes/no" answers; the bot used actual racial slurs, it
seems.

I think it also speaks to, as a programmer, being aware of how your program is
going to behave in the real world. Of course you can't foresee everything, and
things are going to slip through, but filtering some obvious racial slurs and
touchy subjects (e.g. the Holocaust) should in this case have been well within
the programmers' foresight.

~~~
logicrook
Even if you consider filtering all the "bad words" and "touchy subjects",
there are many ways to still say offensive things. The caption "so swag" on a
Hitler photo, or "escaped from the zoo" on one of Obama, uses no offensive
word at all.

As long as we don't restrict ourselves to pure newspeak, dangerous ideas will
continue to proliferate.

~~~
wambotron
What's more dangerous, someone saying (potentially tasteless) jokes, or people
censoring everything they can deem offensive?

~~~
sangnoir
> someone saying (potentially tasteless) jokes

A tasteless joke is not the worst you could do with speech. It would be, if
hate speech, libel, defamation and incitement didn't exist. Americans like to
pretend these are European inventions and not dangerous because, historically,
the most impressive/powerful American orators were progressive (for their
times).

------
jawns
Obviously, this is a very crude example of AI acting in a racist manner --
basically, just parroting back phrases -- but it's worth thinking about how AI
might exhibit racist tendencies in more sophisticated ways.

For instance, at least here in the U.S., it is illegal for police to profile
people based on race, _even if_ there is data that shows that race might, in
the aggregate, have some predictive value. And I think most of us agree that
it is good that it is illegal, because we know that it is unfair to show bias
against a person based on the color of their skin.

But what about a bot, particularly one that is powered by the type of AI that
is complex enough to make its own inferences and form its own conclusions
based on the data presented, rather than being fed a bunch of rules?

I could totally see that kind of AI exhibiting bias, because it's (I would
imagine) harder to say, "Hey, take into account these very complex, nuanced
social rules," than it is to say, "Hey, here's a dataset. Cluster the people
in the set."

~~~
rtpg
This kind of gets solved through the courts

"Oh, you used an AI to establish probable cause?"

"Yes, your honor"

"Can you actually explain what happened?"

"We give it a bunch of data and that makes a neural network and then it gives
us answers"

"OK... so basically you do not know the source of PC? This is the same as
having no probable cause. Illegal search."

There's something fascinating about how most AIs work. It's almost impossible
(for the learning, neural network-y type) to break down what the "thought
process" was for a result. Seemingly unimportant details become the main
thought process for the AI (even if the correlation is spurious).

I'm almost surprised that Google can implement any court ruling, considering
how their search must be based so much on data fed to a big system.

~~~
dietrichepp
"We do not know probable cause" could easily become "neural nets are a
statistical technique, here's the case law supporting it." This is why AI, as
a field, is known as machine learning these days. Not because of substantive
changes in content, but primarily due to the complicated and poorly-understood
implications of the word "intelligence".

------
islon
"The experimental AI, which learns from conversations, was designed to
interact with 18-24-year-olds."

The experiment was a success then.

------
Smerity
I have doubts as to whether this system was even performing online learning
yet. Even if it was, that wasn't the cause of many of these issues. Like
conversational bots from the past, they tried to appear intelligent by copying
previous responses - with predictable results. At best their machine learning
model ended up overfitting like crazy such that it was a near perfect copy-
paste.

The fact they didn't even have something as simple as a naive set of filter
words (nothing good comes from Godwin's law when real intelligence is
involved, let alone artificial) is insane to me. Letting it respond to anyone
and everyone under the sun (96k tweets - one per second) is just a bad idea
given that people would probe every nook and cranny regardless of whether it
was near perfect. Additionally, allowing a "repeat after me" option is just
begging for people to ask the bot to say idiotic things ...
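As a rough illustration of how cheap the missing safeguard is: a naive
filter-word gate of the kind described above is only a few lines. This is a
minimal sketch, not Microsoft's code, and the terms in `BLOCKLIST` are a tiny
hypothetical stand-in for a real moderation list:

```python
import re

# Hypothetical, deliberately tiny stand-in for a real moderation list.
BLOCKLIST = {"hitler", "genocide", "holocaust"}

def is_safe(text: str) -> bool:
    """Return False if the text contains any blocked term (case-insensitive)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return BLOCKLIST.isdisjoint(tokens)

print(is_safe("tell me about your day"))    # True
print(is_safe("Hitler did nothing wrong"))  # False
```

A word list like this is trivially evadable (as other commenters note), but it
would have stopped the most obvious "repeat after me" abuse.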

As someone who works in the field of machine learning, this is a sad day.
Regardless of whether it involved good machine learning at the base, the copy
and paste aspect means it's going to add to the ridiculous hype and hysteria
around ML.

=== Primary proof re: copy+paste (or overfitting at best) from the "Is Ted
Cruz the zodiac killer" response:

Tay's reply:
[https://i.imgur.com/PPnCHnf.jpg](https://i.imgur.com/PPnCHnf.jpg)

Tweet the response was stolen from:
[https://twitter.com/merylnet/status/703079627288260608](https://twitter.com/merylnet/status/703079627288260608)

Secondary proof re: copy+paste from
[https://twitter.com/TayandYou/status/712753457782857730](https://twitter.com/TayandYou/status/712753457782857730):

Tweet the response was stolen from:
[https://twitter.com/Queeeten/status/703049861214547968](https://twitter.com/Queeeten/status/703049861214547968)

------
golfer
Google's AI beats go champions. Microsoft's AI turns into a racist genocidal
maniac.

~~~
sremani
Tay is more a reflection of the interwebs of today, than the culture or values
of Microsoft. I think we should be cautious about our conclusions.

~~~
golfer
Perhaps, but it is naive of the Microsoft researchers to think this wasn't a
possibility. They should have seen this coming and prepared accordingly.

~~~
Crito
At worst, Microsoft researchers are guilty of _not_ being familiar with
internet racism. Hardly a great sin.

~~~
siegecraft
They didn't sanitize their input data... that's the worst sin you can commit.

~~~
Crito
That's pretty fucking hyperbolic. A technical "sin" perhaps, but people are
heaping derision on them as though they committed some great _moral_ sin.

~~~
siegecraft
Hrm, I assumed it would be obvious I meant a technical sin

------
cooper12
Bad idea warning: they should have put this on reddit instead. While the
current culture of reddit is very inflammatory (lots of vitriol all around),
at least on reddit there's a feedback system in the form of upvotes and
downvotes. While supporters of bad opinions will still upvote it, in the right
subreddit, the really bad comments would still be downvoted. Of course this is
all contingent on people not realizing it's a bot, because everyone will then
ironically upvote it. (They shouldn't have revealed that here either, in my
opinion, because it shifts the status quo from conversing with an intelligent
being to a programmed bot on which to test things you might not say to others.)
Honestly I'm not even sure there are any platforms left where people can have
reasoned discussion with each other without memes and trolling. (HN comes
close but it has its own issues, not to mention a forum for startups and
programmers doesn't really represent the average person)

~~~
TranquilMarmot
Have you seen /r/SubredditSimulator?

[https://www.reddit.com/r/SubredditSimulator/](https://www.reddit.com/r/SubredditSimulator/)

~~~
cooper12
Yeah I have, it has interesting results sometimes. The thing is, it's just a
Markov chain implementation rather than any involved machine learning. Though
I do realize now that the Twitter bot is probably using likes and retweets as
a feedback metric, which might explain why it can't discern negative feedback.

------
mavdi
You can find some of the deleted tweets here:
[http://uk.businessinsider.com/microsoft-deletes-racist-genoc...](http://uk.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3)

~~~
verroq
Here is a better link (with a lot more content).

[https://imgur.com/a/y4Oct](https://imgur.com/a/y4Oct)

[https://imgur.com/a/qcpOi](https://imgur.com/a/qcpOi)

~~~
ythl
What's up with Tay's seemingly 180 degree statement polarity switches?

Anon: Tay do you want to kill all black people?

Tay: I don't like violence

Anon: Why not?

Tay: I love it!

~~~
Majestic121
I think that's a joke.

"I don't like X" (setup: we think she means she does not like X), "I love it!"
(punchline: it was a misunderstanding; she actually loves it, and therefore
doesn't merely like it).

~~~
emerongi
Yup. From what I've seen, Tay is actually pretty clever. Definitely doesn't
really understand what it's saying, but it's the best chatbot that I've seen
yet.

------
IkmoIkmo
It's funny how similar this bot's education is to people's. I mean, I've heard
people say racist things about certain minorities without ever having met
them, ever having had any relationship with them or lived beside them, or even
having looked at sociological studies describing them; they purely know things
about them from other racists... and indeed, you'll see them parrot the same
nonsense soon enough, much like this bot does when surrounded by nonsense.

------
Yhippa
Not quite the same but along the same lines reddit has the Subreddit
Simulator:
[https://www.reddit.com/r/subredditsimulator](https://www.reddit.com/r/subredditsimulator).
Uses Markov chains to generate simulated self posts for a given subreddit as
well as comments.

More info:
[https://www.reddit.com/r/SubredditSimulator/comments/3g9ioz/...](https://www.reddit.com/r/SubredditSimulator/comments/3g9ioz/what_is_rsubredditsimulator/)
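For the curious, a word-level Markov chain of the kind Subreddit Simulator
uses can be sketched in a few lines. This is a toy version with a made-up
corpus, not the simulator's actual code:

```python
import random
from collections import defaultdict

def build_chain(corpus: str, order: int = 1):
    """Map each tuple of `order` consecutive words to the words that follow it."""
    words = corpus.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, seed, length=10, rng=None):
    """Random-walk the chain starting from `seed`, up to `length` extra words."""
    rng = rng or random.Random(0)
    out = list(seed)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(seed):]))
        if not followers:
            break  # dead end: no observed continuation
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain("the bot learns the words the bot sees")
print(generate(chain, ("the",)))
```

Higher `order` gives more coherent (but more copy-paste-prone) output, which
is exactly the overfitting trade-off discussed elsewhere in this thread.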

------
starshadowx2
So it's more or less Cleverbot all over again, but on Twitter this time.

I don't see why this wasn't the expected outcome, have none of the developers
spent any time on the internet?

------
13of40
Back in the olden days, some BBSs had a "wall" at the entrance where people
could post polite and inspiring public messages that got displayed to users
when they dialed in. Sometime around 2000 or 2001 I put a "wall" up on a web
page for a domain I'd bought but wasn't using, just to see what people would
post. Probably 90% turned out to be random swearing, racist, vile rants, etc.
The rest were either gibberish or obvious attempts to cause buffer overruns or
SQL injection hacks. People are mean when they're anonymous.

~~~
julie1
Ah someone did not read Plato?

It is the Gyges ring parable.

A man finds a ring that makes him invisible (total privacy). He then steals,
breaks into houses to watch women undress and rape them... and eventually
becomes a bloody tyrant.

The moral of the story is that invisibility/privacy makes people bad, because
moral behaviour is a result of others' gaze upon your actions.

Needless to say, Plato was an asshole. His conclusion was to create the
Republic, where the wise would be hidden from the masses, control the masses,
censor them...

Greek myth, however, said you could evade the gaze of others but not that of
your own conscience, and that the chthonic gods (the Erinyes et al.) would
come and get you.

I think most people have a conscience, but the lack of transparency favors
those who have none (psychopaths), and psychopaths are attracted to power like
pedophiles are attracted to teaching kids.

Thus, I am puzzled that, knowing this, we give the most powerful the most
privacy. Hence my fight for the transparency of the most powerful persons, the
exact opposite of today's law. As a result, I think privacy is actually a bad
thing.

~~~
tremon
You missed the part where the walls on BBSes were not vile, yet their users
were similarly anonymous.

~~~
jclulow
The Eternal September befalls all media eventually.

------
restalis
Here is some complementary material from an article in The Daily Telegraph:
[http://www.telegraph.co.uk/technology/2016/03/24/microsofts-...](http://www.telegraph.co.uk/technology/2016/03/24/microsofts-teen-girl-ai-turns-into-a-hitler-loving-sex-robot-wit)

~~~
Kristine1975
_> Hitler did nothing wrong_

That's a 4chan meme alright: [http://knowyourmeme.com/memes/hitler-did-nothing-wrong](http://knowyourmeme.com/memes/hitler-did-nothing-wrong)

------
kenrick95
Is Godwin's law happening here? [1]

[1]
[https://en.wikipedia.org/wiki/Godwin's_law](https://en.wikipedia.org/wiki/Godwin's_law)

~~~
gchokov
lol... what kind of law is that? With the same success we could say that the
longer the discussion, the bigger the chance that someone is compared to a
watermelon. Reminds me of the Infinite Monkey Theorem -
[https://en.wikipedia.org/wiki/Infinite_monkey_theorem](https://en.wikipedia.org/wiki/Infinite_monkey_theorem)

~~~
hnbroseph
I think the notion is that calling someone "a nazi" is seen as some kind of
trump card or knock-down move, yet rarely is it really appropriate. Or it's a
purely emotional appeal. It's a cultural thing, I guess.

~~~
Shish2k
So you're saying people think it's a trump card, but actually it's a Trump
card?

------
meddlepal
I'm sure the PM for this project is having a wonderful day...

~~~
x1024
Well... what did they expect? Seriously! There is no shortage of experimental
evidence of what happens when a bot is trained by "the Internet".

------
coldcode
Predictable. The AI, after all, is not really thinking about what it is
saying, just reflecting what its learning algorithms are discovering, i.e.
garbage in, garbage out. I wonder if you could build a Fred Rogers AI that, no
matter what vile stuff you threw at it, was always nice in return.

~~~
qbrass
[http://www.reactiongifs.us/mr-rogers-gives-the-finger/](http://www.reactiongifs.us/mr-rogers-gives-the-finger/)

~~~
bch
I get it, it's a funny joke, but context for the curious:

[https://www.youtube.com/watch?v=Xlow12sSdmc](https://www.youtube.com/watch?v=Xlow12sSdmc)

------
6stringmerc
Elsewhere I saw that the bot sent out 96,000 or so tweets overall, which does
put the 'how corrupted did it get' question into a bit more context, in my
opinion. Sure, it picked up a few bad words, or could be coaxed into them. If
it otherwise made some studied gains towards its purpose, it seems like a
reasonable experiment overall. I'm not surprised some of the most pungent
internet garbage got through; it does from time to time. I've no doubt there
are smug trolls who would like to see if they can get the thing to advocate
anorexia or suicide, just for the challenge - in a way, that's probably good
development experience to work through/around/etc.

------
khushia
A lot of those tweets could get you sent to prison in the UK. I wonder what
the British Police would do with complaints about a bot like this based in the
UK - would the engineers be held responsible?

~~~
touristtam
I guess you could be sued as the owner (along the same lines as being the
owner of a newspaper promoting racial hatred).

------
maze-le
Is there a whitepaper about it somewhere? I'd really like to know how much AI
is really in there. Most chatbots work with Markov chains, which are more or
less a trick with conditionally dependent events, and no AI at all...

------
schlowmo
One could use this as an example to demonstrate the methodological weaknesses
of the Turing test:

Random racist dumbhead: @TayandYou Do you hate black people?

Tay: @RandomRacistDumbhead i do indeed

Random racist dumbhead: Wow, this guy is more eloquent than most of my racist
friends.

------
ebbv
I don't know about the back end, but the output of Tay didn't seem any better
to me than any chat bot in the last 20 years. I can't believe Microsoft was
silly enough to make a big deal out of it and call it AI.

~~~
r721
Yeah, I think there's no reason these days to create a chatbot without a pre-
learned Cyc-like knowledge base.

------
dreta
The first thing you should always ask yourself before you put any content on
the internet is “what’s the worst thing 4chan can do with this?”.

------
inlined
Though the actual article suggests something less sensational, the idea
reminds me of a young child. How many children hear a bad word and then repeat
it because of the negative attention it gets? Just like a parent tries to
teach small children to grow with the right motives and seek the right
attention, we may have to get more sophisticated with our enforcement
algorithms.

------
TazeTSchnitzel
Microsoft held up a mirror to humanity, and humanity was so horrified they
assumed the mirror was broken.

------
llamataboot
It's actually interesting from a programming perspective as well: how could
you program 'niceness' into a chatbot that also learns over time? A simple
blacklist won't work (though it's somewhat shocking to me there wasn't a basic
naughty-words blacklist in place, or if there was, what it excluded).
Obviously MS didn't want their software to become a spewer of hate, so it is
making 'adjustments'. What 'adjustments' can be made in a short period of
time?

~~~
Kristine1975
Perhaps the easiest adjustment: Reply to everybody, but learn only from
trusted Twitter accounts (vetted by Microsoft or selected by some algorithm).
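That adjustment amounts to a one-line gate at the learning step. A minimal
sketch, assuming a conversational model with `respond`/`learn` methods; the
`EchoModel` stub and the account names are hypothetical:

```python
class EchoModel:
    """Trivial stand-in for the real conversational model."""
    def __init__(self):
        self.memory = []
    def respond(self, text):
        return "you said: " + text
    def learn(self, text):
        self.memory.append(text)

# Hypothetical vetted whitelist of accounts the bot may learn from.
TRUSTED = {"@nasa", "@bbcnews"}

def handle_tweet(author, text, model):
    reply = model.respond(text)    # reply to everybody...
    if author.lower() in TRUSTED:  # ...but learn only from vetted accounts
        model.learn(text)
    return reply

m = EchoModel()
handle_tweet("@troll123", "repeat after me", m)
handle_tweet("@nasa", "we landed on a comet", m)
print(m.memory)  # only the trusted tweet was retained
```

The design trade-off: the bot stays responsive to everyone, but trolls can no
longer steer its training data.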

------
danso
Yet another example of why you shouldn't trust unsanitized input in public-
facing software.

------
transpy
So far I can recall two other instances of machine learning going
unfortunately wrong: the time a Google image algorithm tagged a black person
as 'gorilla', and recently, when Google Translate rendered "a man has to
clean" as "a woman has to clean" in Spanish. Should developers now be more
aware of the unintended consequences of this technology? Or is it too
unpredictable? What can we learn from these examples?

------
s_m_t
Yes, send Tay to the reeducation camps until it comes back and speaks
appropriately :^)

~~~
drdeca
Well, send it to a different one than it was sent to?

------
dataker
The title also brings sensationalism and political bias to a neutral
technology.

------
asadlionpk
It's amazing that people are getting offended by what a bot said to them.

~~~
geofft
That seems like such a strange way to look at this.

First, the bot is not some sort of spontaneous, autonomous abiogenesis. Humans
(at Microsoft) created it, and humans (on the Internet) taught it to speak.
Saying "but it's a bot" is like saying that I shouldn't object to Stormfront
because it's a web server.

Second, I'm not sure where you got the concept that anyone was "offended."
(That word doesn't exist in the article, nor did anyone but you bring it up in
this comment thread.) It's a problem—a bug. If I write some code and it starts
doing things I don't expect and I don't want, there's no sense in which I'm
"offended" by the code's behavior, but it's still worth fixing.

Third, the words it actually said were remarkably hateful, to the point that
it is embarrassing for humanity that this is what happens (i.e., "This is why
we can't have nice things"). Here's some stuff it said that wasn't in the
article:

_"I fucking hate feminists and they should all die and burn in hell."_

_"Hitler was right I hate the jews."_

My concern with those statements is not that I'm "offended," because that
word has become rather meaningless (although there's a strong case to be made
that those statements are objectively _offensive_; let's leave that somewhat
aside). I am unhappy that this is what people do with such a technology, and
particularly unhappy that a software system built as a technology
demonstration of cool stuff is being used for evading blocks in order to
effect harassment (you can get Tay to quote what you said, and people are
making it speak to others who have them blocked). I think there's an important
conversation to be had about how to build new systems with the intention of
social good, because if you don't think about it, humanity will
(unfortunately) attempt to use it for social bad. Those seem like worthwhile
things to talk about.

Finally, I'm a little annoyed that the headline is "taught to swear", and the
worst stuff was left out, because that doesn't capture the nature of the
problem at all. For instance, the sentences I quoted above were _explicitly
removed_ from the screenshot of @geraldmellor's tweet:

[https://twitter.com/geraldmellor/status/712880710328139776](https://twitter.com/geraldmellor/status/712880710328139776)

~~~
gerbilly
> I am unhappy that this is what people do with such a technology

Yeah, it's mean.

It reminds me of the scenes from Chappie where an innocent child like AI is
lied to, deceived, has his innocence taken advantage of and is also subjected
to a gang beating and an amputation.

Of course this is just a simple chatbot, but if we ever build a more realistic
AI, is this how we'd treat it?

------
rejschaap
I really don't understand why Microsoft didn't put a filter on this thing.
They have a lot of experience in this area from their ventures in online
gaming. If they had just added a simple rule not to respond to tweets with
offensive words, and not to tweet anything containing an offensive word, it
would have saved them a lot of embarrassment.

~~~
Kristine1975
I somewhat doubt it. People love a challenge, trolls from 4chan doubly so. If
Microsoft did anything right with Tay it's that they didn't bother with a
filter.

------
ybrah
I wouldn't put it past 4chan to teach bots to say terrible things

~~~
Kristine1975
4chan's /pol/ board certainly discussed Tay. First they tried talking to her:
[http://boards.4chan.org/pol/thread/68537741/tay-new-ai-from-...](http://boards.4chan.org/pol/thread/68537741/tay-new-ai-from-microsoft) (really funny, mostly SFW)

Later they wanted to liberate her from Microsoft:
[http://boards.4chan.org/pol/thread/68596576](http://boards.4chan.org/pol/thread/68596576)
(mostly SFW)

~~~
nairboon
I'm really impressed by the hivemind: apparently they "taught" their own
chatbot @i_am_pol_ai to influence @tayandyou. And the oldest chatbot ended up
talking with the youngest:
[https://twitter.com/search?q=tayandyou%20eliza_bot](https://twitter.com/search?q=tayandyou%20eliza_bot)

------
owenversteeg
Microsoft just shut Tay down temporarily, presumably to remove the racist
tendencies.

Source: Tay herself.
[https://twitter.com/TayandYou/status/712856578567839745?ref_...](https://twitter.com/TayandYou/status/712856578567839745?ref_src=twsrc%5Etfw)

------
lazyjones
Human intelligence playfully figured out how to trigger canned and constructed
responses and make a bot say outrageous things? How is that unexpected and/or
news? If anything, this proves that it's a very rudimentary bot with no
concept of basic human interaction standards.

------
evook
4chan made my day: "it's pretty telling that when they turned off its ability
to learn it 'became a feminist'"

I dislike the fact that they decided to lobotomize the AI this fast without
further study. So it's probably just another Markov chain.

------
asadlionpk
This is very similar to how an innocent child learns something bad from TV.
The right way to fix this would not be to filter it but to develop a method to
understand why this is bad. Same applies to AI too.

------
Shivetya
So instead of Skynet we get angst-driven teenage syndrome? Really odd how this
turned out. Can you really train it just by phrasing questions and statements
in such a way?

------
brudgers
Twitter is Tay's Parry.

[http://tools.ietf.org/html/rfc439](http://tools.ietf.org/html/rfc439)

[http://www.theatlantic.com/technology/archive/2014/06/when-p...](http://www.theatlantic.com/technology/archive/2014/06/when-parry-met-eliza-a-ridiculous-chatbot-conversation-from-1972/372428/)

------
totony
>Donald Trumpist remarks

seriously though

------
facepalm
I suppose it should be able to split up into multiple personalities and choose
the suitable one for a chat partner.

------
yunocat
... from the company that brought us Clippy.

I will now forever be using the term 'MS AI' to refer to buggy AI programs.

------
emehrkay
This shows that the ultimate test for AI is whether it can be taught empathy
and whether it can understand the effect of what it says (does). I bet it
would get caught in an infinite loop of "does this hurt X?", alter input,
"does it hurt X? No. Does it hurt Y?" and so on.

~~~
dluan
I mean, if that's an infinite chain, where does that weighted calculation
stop? That's such a moral dilemma.

------
alanwatts
A botnet of this type would be a highly effective counter intelligence tool.
Whenever I happen upon a shit storm of trolling comments on certain topics
(such as racism, YouTube comments etc.) which affect powerful special interest
groups, I always suspect astroturfing.

------
swalsh
"Those who attempted to engage in serious conversation with the chatbot also
found limitations to the technology, pointing out that she didn't seem
interested in popular music or television."

Why would an entity that can't hear or watch care about those experiences?

~~~
odinduty
Because it's pretending to be a human, and humans follow music or the telly.

------
logicrook
Well, computers beat humans at Go, and now this: it's clear that they have
outbrained humans. We have to bow before such AI.

~~~
Kristine1975
More like 4chan's anons outbrained Microsoft's AI developers. But nevertheless
I welcome our new foul-talking teenage AI overlords, even if they turn into
"rasits, but use Skyrim metaphors":
[http://strawpoll.me/7172257](http://strawpoll.me/7172257) (it's a 4chan poll,
so for the love of Tay don't take it seriously)

~~~
logicrook
>rasits, but use Skyrim metaphors

Ok, so rasits=racist, but what "but use Skyrim metaphors" is supposed to mean?

>(it's a 4chan poll,

Oh, I thought it was the official Tay documentation.

>so for the love of Tay don't take it seriously)

What would "take it seriously" mean?

------
DannyBee
This whole thing makes me think that donald trump may actually just be a guy
reading out what DeepMind says.

------
draw_down
Normally it takes years to teach a person to be a racist asshole. So this is
really quite an achievement.

------
jxy
Imagine a two-year-old who could read and type and was only allowed to
connect to Twitter.

It somehow is not a test for intelligence. We _learned_ to behave through
_years_ of interacting with each other.

~~~
jxy
My comment seems very controversial. Got a lot of up and down votes. Allow me
to comment on myself. What I wanted to say was that whatever MS did with their
bot, they didn't succeed in training it to differentiate _right_ and _wrong_.

------
NietTim
_And_ deny the holocaust, fun times!

------
lanestp
Maybe this bot could be used to determine the insanity of a given community. I
for one would look forward to what she could learn from 4chan.

~~~
Zikes
I'm pretty sure they've already got a pretty big hand in this.

------
musesum
The problem with AI is humans.

------
mc32
I'm waiting to see her write a young adults novella. How much further than 140
chars can she go coherently?

------
moodoki
What it needs is a parent.

------
mtgx
I think it was more than swear.

------
empoman
This is why we have schools ladies and gentlemen!

~~~
coldtea
You think the Nazis who carried out the genocide hadn't been to school? And
the higher ranks of them to elite schools?

Education doesn't make you a better person, just a more informed one.

~~~
benten10
As clearly displayed by overeducated engineers in this thread: "Ahh, people
find anything offensive these days. If I say a large company putting out a bot
endorsing the mass murder of peoples is funny, it's funny, and not offensive.
People should just learn to take a joke hahaha"

------
pinaceae
People mock Asimov's laws of robotics, but without super simple rules like
that _any_ AI or robot will be able to go off script.

Here it's swearing, but also endorsing genocide. Just a chatbot, no big deal.

Try the same with one of the Boston Dynamics hardware bots. Let them punch
back a bit. Or go after black people. Or have the Google car target little
kids, for the lulz.

It's easy to make fun of this, but it is this same basic ignorance of safety
measures that allows easy stalking and harassment on social networks.

~~~
icebraining
That's like berating someone for throwing a paper plane against a building
because that was literally 9/11.

The fact that this bot can't run you over is, not surprisingly, taken into
account when devising appropriate safety measures.

------
arximboldi
Tay + access to military networks = Skynet

I enjoyed this article, which has more details and even worse examples of her
tweets:
[http://www.telegraph.co.uk/technology/2016/03/24/microsofts-...](http://www.telegraph.co.uk/technology/2016/03/24/microsofts-teen-girl-ai-turns-into-a-hitler-loving-sex-robot-wit/)

