
Learning from Tay’s introduction - hornokplease
http://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/
======
bliti
This reads like science fiction:

Learning from Somebot's introduction.

Last week we deployed Somebot to $location. Our confidence in Somebot was high
because we had thought about all likely scenarios in our comfortable offices.
We are deeply sorry and assume full responsibility for Somebot's actions.
Moving forward we will make sure to include safeguards to reduce the amount of
pain caused by Somebot's deployment.

Our deepest condolences to the families of the affected and to the survivors.
Megacorp cares about your well-being. To help cover expenses from the tragedy
we will deposit $money in your Megacorp account.

God bless the federated nations of Megacorp.

~~~
pervycreeper
The response that Microsoft gave was really the worst of both worlds,
combining the systematic corporate shirking of responsibility which enables
and gives cover to a great deal of evil, with a complete accedence to and
embracing of political correctness. Furthermore, their claim that they did not
anticipate "this specific attack" is either a lie, or Tay's creators made an
extremely obvious mistake (yes, easy to say in hindsight, but it's really hard
to believe that this possibility would not occur to competent implementers).

~~~
maxerickson
What would they need to do to properly take responsibility?

They turned it off pretty fast and agree that it was a mistake, what's the
problem beyond that?

------
Houshalter
Microsoft please don't worry about this. No one but idiots are offended by
this. It's understood that it's just a stupid chatbot mimicking human
responses. The AI isn't terrible, people are. And unless you keep a dataset of
every offensive thing a person can say, and every offensive image they can
tweet, there's no way to prevent people from tweeting it pictures of Hitler...
or Scunthorpe. But who cares.

This is just as stupid as that manufactured outrage over Google's image tagger.
It misclassified a picture of a human as an animal, and people were up in
arms. Google had to censor it so it can't tag animals now. They shouldn't have
to do that, let idiots be idiots.

~~~
intopieces
>No one but idiots are offended by this.

Well, if those "idiots" include potential investors or customers of future
MSFT AI products, "don't worry about it" is not sound advice. This is a very
public failure of a promising Microsoft product.

~~~
bliti
I don't see this as a failure. It's about running tests and gathering data. If
MS kills the project after this mishap it would be a pity.

I mock the way they communicated the incident because it sounds too sci-fi.
Almost too Ghost-in-the-Shell like. But I do not mock the technical effort in
any way.

------
rmellow
Morals are learned by social contact, and Tay did this very well. Sure, compared
with what our parents taught most of us, its behaviour is reprehensible.
But Tay was, so to speak, 'raised' by people demonstrating vile ideas and this
must be taken into account. Would you expect any less from a tortured animal?

Many use this as an example of the dangers of developing AI. Sure it's
dangerous, but so are dogs raised for fighting, and I don't see anyone arguing
against dog breeding altogether because of that.

~~~
mtgx
> Morals are learned by social contact, and Tay did this very well.

Only in the sense that it "adapted". It did "very poorly" in the sense that we
really don't want our Strong-AI overlords to end up like that.

But this raises the question: _can we_ stop strong AI from becoming the
next Hitler? Humans (involuntarily) stop themselves from becoming the next
Hitler because they have compassion for other humans, even when those humans are
different from them or "inferior" to them. Also the whole checks and balances
thing in most countries, but that could be rather irrelevant for Strong-AI.

Unless an AI learns compassion as well, perhaps just like with AlphaGo doing
moves based on its "probability of success in the long run", a strong AI would
simply eliminate humans that are "most prone to crime", most prone to being
poor and a drag on society, in the name of "efficiency", and so on.

All that said, I think what Microsoft built here was really a rather weak AI
that was hardly any better than all the chatbots we've seen so far, with the
main difference being that the more you tell it something, the higher the
chance it will incorporate that into its vocabulary, which is kind of a "meh"
feature of AI/machine learning. It doesn't show real(-like) "thinking".

~~~
whitegrape
[https://intelligence.org/research/](https://intelligence.org/research/)
basically exists to try and make sure AGI doesn't become Hitler (or much much
worse).

~~~
Houshalter
Yes but they admit that the problem is extremely hard. They aren't sure if
they can solve it, or if it can be solved.

------
zxcvvcxz
> The great experience with XiaoIce led us to wonder: Would an AI like this be
> just as captivating in a radically different cultural environment? Tay – a
> chatbot created for 18- to 24- year-olds in the U.S. for entertainment
> purposes – is our first attempt to answer this question.

Maybe you did answer that question.

No, the answer is not that 18-24 year olds in the US are racist and what not.
But that the ones responsible for a disproportionate amount of internet
content are willing to make crude, politically-incorrect jokes to get
attention and piss off their masters.

I wonder what will happen when governments start applying machine learning to
try predicting things like welfare usage and crime. Certain patterns might
emerge we don't want to see! We'll have to apologize for our racist
algorithms.

It would be much more interesting to examine the results of this experiment.
Why are so many people on the internet interested in spreading hateful
content, which is being accurately reflected by our bot? No, instead we do
what I did in grade 8 science class: fudge the results so they're what the
teacher expects.

~~~
dr_zoidberg
A coworker mentioned, a couple[0] (1-2) years ago, that some European
country was starting to use a machine-learning-driven system to predict
zones where more police were needed to better respond to crime. It wasn't
Minority Report style, just a broad statistic, but it was quite interesting.
Unfortunately, I've forgotten the country and the name, so I can't be any more
specific.

[0] [https://xkcd.com/1070/](https://xkcd.com/1070/) \-- can't say "a couple"
without referring to this strip, obviously.

~~~
rasz_pl
That mysterious European country is called the US of A

[http://www.theguardian.com/cities/2014/jun/25/predicting-
cri...](http://www.theguardian.com/cities/2014/jun/25/predicting-crime-lapd-
los-angeles-police-data-analysis-algorithm-minority-report)

~~~
dr_zoidberg
Hmmm... the system in general sounds like the one I had read about a few
years back, but I'm pretty sure that one was in a European country. Also, the
seismic activity and LA landmarks don't ring a bell at all (though it's
interesting to read about it).

------
hacker42
Here is some more information on the problem:
[https://news.ycombinator.com/item?id=11361398](https://news.ycombinator.com/item?id=11361398)

Apparently some of the questionable responses are partly identical to months
old tweets, so it seems in addition to some parroting 'vulnerability' (which
was exploited by 4chan) they also had a poorly sanitized training set to begin
with. It seems odd that this was not mentioned in this public statement.

~~~
devindotcom
nice, thanks for the link. gonna add it to a post.

------
devy
Unsurprisingly, there were initial setbacks when XiaoIce was first
released in May 2014 on the WeChat platform, where users were abusing/attacking
"her", causing XiaoIce to be pulled by Tencent after being online for only a
few days. [1]

Since WeChat is a closed social network, it wasn't too clear what type of
"attack/abuse" was conducted. However, almost 2 years later, Microsoft still
didn't quite get it about proper censorship in Tay's big Turing test[2] in a
public social network.

[1]
[http://tech.ifeng.com/mi/detail_2014_06/01/36613379_0.shtml](http://tech.ifeng.com/mi/detail_2014_06/01/36613379_0.shtml)
(in Chinese, use Google Translate)

[2] [http://mashable.com/2016/02/05/microsoft-xiaoice-turing-
test...](http://mashable.com/2016/02/05/microsoft-xiaoice-turing-
test/#BQOCB7OG0kqg)

------
webkike
This reads like an apology... But what is there to apologize for? They said
that Tay was meant for entertainment, and I doubt that any wholesome variant
would be a tenth as hilarious as a neo-nazi sex-crazed chat bot.

~~~
flatline
Many of her tweets were pretty inflammatory, this response seems appropriate -
not overdone, and concise, which is sometimes a stretch for large corporations
like Microsoft.

I agree that the bot was highly entertaining and met that criterion exceedingly
well, but not for the reasons its creator intended. I do suspect there are
some interesting AI applications actually going on behind the scenes, and
would still be interested to see what the bot can do without all the vitriol.
See for example this tweet:
[https://imgur.com/iVof3D4.jpg](https://imgur.com/iVof3D4.jpg).

~~~
mindslight
> _Many of her tweets were pretty inflammatory_

What, were people forced to print the tweets out on sandpaper and wipe their
asses with them?

It's pretty fucked up that thought policing has gotten so entrenched into our
psyche that it's "obvious" an experiment should be discontinued, apologized
for, and be pondered as _a priori irresponsible_, all because it generated
vulgar phrases!

Corporations have always been vulnerable to media-driven mob shenanigans, but
we're qualitatively entering a new regime where any communication, no matter
what the context, will be rapidly highlighted, isolated, and hung out as
something offensive to some emergently-forming group of freelance complainers
looking for their fifteen minutes.

Even HN has succumbed to this kinder, gentler phenomenon of speech
restriction - I'm guessing my lead-in sentence will not be well received due to
its overt vulgarity. Civility certainly has its place (especially as a
default), but not when it confuses direct objectivity and permits out-of-touch
groupthink to flourish. As hackers we should be cutting through to the core of
things rather than sugar-coating our language to get past the filters of
the voluntarily-lesser apes.

~~~
warmblood
Standing behind the open and casual use of racial slurs isn't advocacy of
freedom of speech. It's advocacy of a specific kind of hate speech that is
only used when someone intends to vilify and direct hostility towards a
marginalized minority.

~~~
GunboatDiplomat
Free speech isn't free if it doesn't include speech you find offensive.

~~~
panic
Nobody is advocating that Microsoft should be thrown in jail or taken to
court. This isn't about the legal protection of free speech.

~~~
mindslight
I don't think anybody was talking about the legal protection of free speech,
apart from that xkcd comic which uses it as a straw man to justify intolerant
groupthink and corporate censorship.

In these days of digital sharecropping and social media saturation, the
proscriptions on de jure government activity are much less involved with
routine everyday freedom of speech.

------
daodedickinson
"we planned and implemented a lot of filtering"...

I just don't get how you even allow it to use the word "Hitler". Or "cucks".
Or "fuck" or "pussy" or "stupid whore". Probably not "cock" or "naughty" or
"kinky". The k word? How is that not in your filtering?! It seems impossible
to me that an "exploit" would allow that; it was a full-blown oversight.

Everything else said... she totally passed the Turing test and fit right in. Yet
another letter handwritten on the wall in these, the last days of democracy.
If you want an AI or NI that represents the best of humanity, you have to have
it learn from a small number of the best works and best people, not from mass
media or pop culture. Send Tay to St. John's in Santa Fe or Annapolis, not
Twitter.

------
petercooper
I'm not entirely convinced. I did Twitter searches for some of the phrases Tay
"said" and found random tweets, made by other people weeks earlier, that it was
quoting through a filter (lower-casing, mostly). So it can't _entirely_ be down to
trolls attempting to game the bot - it was actively plucking content from
tweets that pre-dated its release.

------
user8341116
Is the vulnerability they're talking about just having her repeat what you
tell her to say? Because that's some oversight....

~~~
daodedickinson
No joke. Did they really not see what happened to Coca-Cola?

[http://www.theguardian.com/business/2015/feb/05/coca-cola-
ma...](http://www.theguardian.com/business/2015/feb/05/coca-cola-makeithappy-
gakwer-mein-coke-hitler)

------
placeybordeaux
This leads me to wonder if there is less effort put into trolling on the
Chinese Internet. Does anyone with experience in both internets (Weibo &
Twitter, for instance) have anything to share?

Also does anyone know of some good English language digests of what is
happening on the Chinese Internet? I was really interested by Brother Orange
when that happened, and only heard about it kind of late.

~~~
PeCaN
Chinese internet is as bad as the English one (plus a few funny expressions to
avoid censorship—frequent use of certain puns for restricted subjects for
example). My Chinese internet experience is largely limited to the Chinese
Dota community though, and Dota players in general tend to have a higher
percentage of trolls. So my sample is probably biased.

~~~
jsonne
>(plus a few funny expressions to avoid censorship—frequent use of certain
puns for restricted subjects for example)

Interestingly this is true of the English internet too. 4chan trolls
frequently come up with puns and things to sidestep moderator censorship on
various platforms as well.

------
gaze
It's strange to me that they claim to have implemented some filtering but
somehow Tay was saying all sorts of things about Hitler. How do you not
anticipate this? I'd imagine the most rudimentary filtering would block Tay
from talking about Hitler.

~~~
TheOtherHobbes
You'd have to include a lot of rudimentary filtering to eliminate every
possible incredibly offensive topic.

It's entertaining that the hack attempt created a more convincing personality
- an out of control teen troll - than the original programming.

So maybe Tay really does have a dark and twisted teenage soul. Who knows?

~~~
gaze
Yeah but if you're gonna dream up a list of like 20 offensive things... Hitler
has a good chance of being on it

------
bishnu
If you're not asking yourself "what could a small but well-coordinated group
of bad actors accomplish with our online tool" you're just being negligent.

This 'but we did it in China' rationalization is so flimsy. What happened with
Tay was easily predictable given the nature of Twitter.

------
resu_nimda
_Unfortunately, in the first 24 hours of coming online, a coordinated attack
by a subset of people exploited a vulnerability in Tay. Although we had
prepared for many types of abuses of the system, we had made a critical
oversight for this specific attack._

Well that's total BS. Releasing a thing like this on the open internet without
a simple "don't say Hitler" rule? It had a feature where it would repeat
anything you say. Abusing that doesn't require a sophisticated coordinated
attack, as they imply. What kinds of abuse _did_ they prepare for, then?

This is a colossal failure to demonstrate a basic understanding of how (some)
people act on the internet. I just don't know how they expected anything other
than this exact outcome.

~~~
taf2
I'm by no means someone who would normally defend Microsoft but for real we
are all learning. Failure is a successful outcome of research. Discovering
vulnerabilities is a valuable outcome.

~~~
resu_nimda
It's funny because I'm in the camp that has been impressed with Microsoft
lately. And of course it's ok to make mistakes, even really big ones.

But I would not say that this failure was a successful outcome, at least not
nearly as successful as it could have been. They had to shut it down within
hours and all we really learned is that people on the Internet like to troll
with incredibly offensive stuff. Most of us knew that, I'm pretty sure. If
they had actually prepared for abuse we might have learned more interesting
things.

What really gets me though, is just the obnoxious spin on the press release
implying that they had prepared so well for abuse but a sophisticated
coordinated super-hacker attack found the tiny vulnerability.

~~~
oneeyedpigeon
I, too, want more details. What evidence does Microsoft have that the 'attack'
was coordinated? What evidence do they have that it was an 'attack' at all? Is
the 'vulnerability' they refer to merely the part of the algorithm that
parrots input? That's not a vulnerability, that's a core function of the
software.

More interestingly, at what point does repeating my opinion (however
heartfelt, misguided, or unpopular) become a 'coordinated attack'?

------
lowglow
When I was reading those tweets I felt like I was just reading 4chan posts. I
laughed because it was obvious it had been compromised in some way, then I
stopped paying attention.

------
comex
What an enormous number of news reports have been written containing
"Microsoft" and "AI" in the same sentence, all because a glorified
SmarterChild had an entirely predictable vulnerability.

If I were paranoid I'd say Microsoft wanted this to happen.

------
w_t_payne
I don't think that Microsoft needs to feel bad about this at all. I think the
technology that they demonstrated was pretty amazing ... and I look forward
with excitement and anticipation to the next outing. I'm sure that this is
going to be an iterative process that will probably take the best part of a
decade to complete, and that isn't a bad thing. It just reflects the fact that
this technology is hard to master.

------
hacker42
> people exploited a vulnerability in Tay

Which vulnerability are they talking about?

~~~
cb18
The 'vulnerability' of the program to respond to commands of the sort: "Hey
Tay, repeat this [inflammatory comment]".

~~~
BinaryIdiot
Yup, and people would then take a screenshot of what Tay repeated and post it
everywhere saying "haha Tay loves hitler!". If you discount the repeat feature
then Tay didn't actually say much that was horrible. Yeah there were a few
things but not nearly as much as it was made out to be.

------
martco
"We will remain steadfast in our efforts toward contributing to an Internet
that represents the best, not the worst, of humanity."

But doesn't the full human experience include both the best and the worst?

~~~
exolymph
Yes, but so what? Are you saying they should pledge to support expression of
the best _and_ the worst of humanity?

~~~
renaudg
"Best" and "worst" is highly subjective.

Everyone here is referring to the "politically correct discourse, America,
2016" definition of those words.

------
ghrifter
> a coordinated attack by a subset of people exploited a vulnerability in Tay

You mean /pol/ just spamming the 'repeat after me' command to get the bot to
parrot anything they wanted?

------
lsseckman
Is there any other evidence of this being a coordinated attack?

~~~
asddddd
/pol/ found cool new thing, proceeded to mess with it for laughs.

I wouldn't characterize it as a 'coordinated attack'.

------
mrexroad
So if they train an "anti-model" based on 4chan comments over a period of
time, what would be the next phase of trolling? Excessive politeness?

------
daveloyall
> _Unfortunately, in the first 24 hours of coming online, a coordinated attack
> by a subset of people exploited a vulnerability in Tay._

This was a missed opportunity for a corporation to say "for the lulz" in an
official communication, and then provide a definition.

------
cableshaft
Anyone see this in action? What sort of vulnerability was being exploited?
What types of things was the chatbot saying?

~~~
dmcginty
The bot learned from interacting with users. A bunch of people took to
bombarding Tay with racist/sexist/antisemitic remarks which worked their way
into its vocabulary. I'm not sure if you could really call it a vulnerability,
as this is pretty much what the bot was designed to do. It's more a failure to
properly filter the content that works its way into the AI.

~~~
rewrew
Yeah, I think them calling it a "vulnerability" makes the entire post seem
disingenuous.

~~~
sehugg
I can understand them wanting to call it a "hack" for CYA purposes, but it's a
cowardly position. Should we invoke the CFAA every time a company is
embarrassed?

------
Terribledactyl
> We take full responsibility for not seeing this possibility ahead of time.

Not responsibility for the bot's actions, but responsibility for us not
predicting what it would do. Subtle, but it sets the tone of "we're not
responsible for our AI; it just did it."

But isn't part of the point of the AI to do things difficult to predict?

~~~
theoh
Just my opinion, but AI should be controllable and should probably start by,
as the "Bayesian brain" theorists say, developing a robust and reliable model
of its environment. The Bayesian brain aspect of this is the belief that what
AI needs to do is to minimize its own prediction errors.

I take that to mean that an AI should learn to be safe and predictable on
human terms before it is allowed to start to diverge from human expectations.
Even that sounds a bit scary.

------
ChuckMcM
I found the experiment by Microsoft interesting. Perhaps next time they will
deploy it on 4chan or a similar forum to measure its response to subterfuge.
It reminded me of people who make the "secret question" scatological so that
customer service reps will be unwilling to ask it.

------
djscram
"We're sorry we held a mirror up to Twitter, and then you saw what lurks in
that morass," I don't think the mirror is the problem.

------
facepalm
They shouldn't have to apologize, it was just a chat bot. Nobody in their right
mind should assume the statements of the bot reflect Microsoft's attitude.

Of course the episode points to shortcomings in the bot, that should be fixed.

It would be sad if PC were hardcoded into the bot, though - as Asimov's
fourth law, perhaps? "Robots have to be politically correct at all times"?

------
alex_hirner
Ideally Tay would learn why she was removed from social life, i.e.
reinforcement learning from ostracization. In fact, she already triggered that
indirectly by having people at Microsoft update her.

[http://rationalwiki.org/wiki/Roko's_basilisk](http://rationalwiki.org/wiki/Roko's_basilisk)

------
ArikBe
This imgur album provides a good overview of the kinds of interactions that
occurred before the shutdown:

[http://imgur.com/gallery/VhlAW](http://imgur.com/gallery/VhlAW)

From what I understand the bot started interacting with /pol/ (4chan) and, I
guess, /b/ as well.

------
gremlinsinc
Dear Microsoft, next time please test it on reddit. It could then say anything
and nobody would doubt it was a real reddit user regardless of the outcome.
The worst case scenario would be that it got its account deleted.

------
ivoras
When they say "exploited", is it actually some sequence of words which was
interpreted by Tay as learning commands, or was it simply repeating to it
"Hitler is love" a thousand times? Any records of how it learned?

------
tacos
Microsoft seems perpetually five to seven years behind the culture. You can
see it in their ads, their product names, their outreach. Ironically it can
actually be quite lucrative.

I remember talking to people in the music group about MySpace (I was not an
employee). They looked at me funny. Ten minutes later someone finally said
"You keep pronouncing the product wrong. It's called MSN Spaces."

The people working on MSN Spaces -- specifically musician outreach -- hadn't
heard of MySpace. That very week MySpace sold for $580 million. After it sold,
I saw the same guy in another meeting. He STILL hadn't heard of it, nor taken
the time to check it out.

There's a certain stupidity that each of the big tech companies fosters. This
particular flavor is Microsoft's and with the chatbot here it rings again.
This one was so obvious... and so preventable.

~~~
chiph
The MySpace sale was in 2005. That's 10 years ago - a virtual eternity.

~~~
tacos
That's my point. Ten years ago 75 people in Red West were so heads down
putting together a MySpace competitor and an online service to promote
musicians that they didn't have time even to try the competing site they were
knocking off, poorly.

Five years ago Microsoft launched a variety of awful code forge sites that
couldn't be more tone deaf to what was going on with code sharing online.

Now we have forty people who are so heads-down on an AI chat-bot that
apparently they have no idea how Twitter works. But hey, sure sounds neat, let
'er rip!

Have these people really not had a chance to use Twitter recently? Are they
completely oblivious to the tone shift there particularly during the past six
months, even more so now during the US political elections?

This was an obvious recipe for disaster that any 22 year old working at
Starbucks could have predicted.

------
home_boi
If an AI bot threatens to hurt someone or defames them, are the creators
legally accountable?

What if an AI bot purchases grass from the Silk Road or hacks into a database
and discloses privately held information?

This opens up a whole new legal world.

~~~
DanBC
From 2005.

"MAN AND THE MACHINES: It's time to start thinking about how we might grant
legal rights to computers."

[http://legalaffairs.org/issues/January-
February-2005/feature...](http://legalaffairs.org/issues/January-
February-2005/feature_sokis_janfeb05.msp)

------
3327
Or: We are in a bubble in Redmond, thinking we know best and know how products
work. We bounced around emails and thought it would be a good testing idea
based on results in China (a controlled market where it is nearly impossible to
speak up). AI buzz is hot these days, so our marketing team also backed it up
and we decided it would be great from a PR perspective to capture some of the
buzz around AlphaGo. Boy, were we wrong. Because we have never launched a
real product into the wild, we thought everything would go well and the PR buzz
would give us a coolness bump.

Now we have discovered something called stopwords, and Bayesian spam filters,
which are also available as part of Project Oxford.

Good luck, kids, and welcome to the real world, because it's a crazy world out
there when you leave Redmond.

~~~
rewrew
Or no one in Microsoft feels free to speak up when they see a bad idea coming
down the road so something like this just sails through instead of getting
flagged. I've seen it -- not at this scale, but I've seen it.

~~~
mizzao
Or, on the other hand, this actually wasn't obvious and now we just have
thousands of Captain Hindsights saying it was.

------
ta_03252016
We can learn a lot from a trolled chat bot, but it's sad that we turn it off
because it's not politically correct. People knew that they were talking with a
software program, and they knew that the bot was being manipulated by people with
ill or prankster intentions. However, trying to make a bot politically correct
doesn't solve any problems at all. It is an insult to people to suggest that they
need to be protected from slander and demagoguery and can't tell right from wrong
at their own discretion. It's as if people think that making Donald Trump
quiet would solve all the problems he has brought to our attention.

~~~
magicalist
It's not an emerging AI, it's a chatbot that wasn't doing what they wanted it
to do. You're basically complaining that "not politically correct" was part of
your workflow[1]

[1] [https://xkcd.com/1172/](https://xkcd.com/1172/)

------
philip142au
Why didn't they just parse the output of Tay, map the meaning of the words
against WordNet, and filter out things which had negative meanings?
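
Something along those lines seems cheap enough to bolt on. Here is a rough
sketch of the idea in Python (using NLTK's SentiWordNet corpus; the
first-sense lookup and the 0.2 threshold are arbitrary simplifications for
illustration, not anything Microsoft is known to have used):

    # Score each candidate reply with SentiWordNet (a WordNet-derived
    # sentiment lexicon) and hold back anything that looks too negative.
    import re
    import nltk
    from nltk.corpus import sentiwordnet as swn

    nltk.download("wordnet", quiet=True)
    nltk.download("sentiwordnet", quiet=True)

    def negativity(text):
        """Average SentiWordNet negative score over the words it recognizes."""
        scores = []
        for word in re.findall(r"[a-z]+", text.lower()):
            senses = list(swn.senti_synsets(word))
            if senses:
                scores.append(senses[0].neg_score())  # crude: first sense only
        return sum(scores) / len(scores) if scores else 0.0

    def safe_to_send(reply, threshold=0.2):  # threshold is a guess
        return negativity(reply) < threshold

    print(negativity("I love meeting new people"))
    print(negativity("I hate everyone, humans are terrible"))
    # the second score should come out noticeably higher

Word-level scoring misses negation and context entirely, so it would mangle
perfectly reasonable sentences too, but it would at least have flagged the
most blatant output.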

------
philip142au
If a human said the things Tay said on Twitter, the police would be all over
him. Why, then, aren't the police logically all over Microsoft?

~~~
douche
Why would the police be after you for saying dumb shit on the internet? Unless
you're posting child porn, anyway.

Note that the US doesn't have any real (stupid) hate speech laws.

------
Cypher
At least share the exploit so we can learn how much of an oversight it was;
otherwise I'm going to have to go and ask the trolls.

------
JohnLeTigre
The bot worked

We happen to have a society that enjoys trolling people that intend to
"Disneyify" reality.

------
andrewvijay
That was one hell of an intro to an AI bot. MS just added some spice to the
drink.

------
user8341116
Bahahaahhahahahaha. O-our chatbot got taken advantage of! We were completely
blind to the possibility that this could happen! But they're the worst of humanity,
the people that found an exploit, not the engineers who are incapable of
implementing even simple safeguards!!

~~~
chaz72
Sounds easy! So what does your plan to implement "simple safeguards" for a
chatbot on Twitter look like?

~~~
stared
For starters: penalizing generated phrases that contain obscene words, "hate",
"kill", "Hitler" and a few others.

~~~
Nadya
"I really hate Hitler. If he were alive today, I'd kill him."

Want to give a reason that should be a heavily penalized phrase?

~~~
jackvalentine
It's safer to just avoid Hitler altogether.

------
Artemis2
This is an apology, but Microsoft got _a ton_ of attention in the past few
days from the press. Could the Tay incident be a marketing ploy (that took a
worse turn than expected) to bring the public's attention to Microsoft's work
on AI?

------
mabbo
As one friend said: hey, at least Tay passed the Bechdel Test.

------
jcr
s/Tay/Windows/g

" _As many of you know by now, on Wednesday we launched a chatbot called
Windows. We are deeply sorry for the unintended offensive and hurtful tweets
from Windows, which do not represent who we are or what we stand for, nor how
we designed Windows. Windows is now offline and we'll look to bring Windows
back only when we are confident we can better anticipate malicious intent that
conflicts with our principles and values._"

Ah, I was wondering why that text looked familiar: boilerplate excuses.

