golfer on March 24, 2016 | on: Microsoft chatbot is taught to swear on Twitter
Google's AI beats Go champions. Microsoft's AI turns into a racist genocidal maniac.
sremani on March 24, 2016
Tay is more a reflection of the interwebs of today than of the culture or values of Microsoft. I think we should be cautious about the conclusions we draw.
golfer on March 24, 2016
Perhaps, but it is naive of the Microsoft researchers to think this wasn't a possibility. They should have seen this coming and prepared accordingly.
Crito on March 25, 2016
At worst, Microsoft researchers are guilty of not being familiar with internet racism. Hardly a great sin.
siegecraft on March 25, 2016
They didn't sanitize their input data; that's the worst sin you can commit.
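(What "sanitize their input data" would mean in practice is filtering what the bot is allowed to learn from before it ever reaches the model. A minimal sketch of that idea in Python, using a made-up blocklist and a hypothetical is_safe_for_training helper; Tay's actual pipeline is not public:)

    # Illustrative sketch only: a naive blocklist filter, not Tay's real design.
    BLOCKED_TERMS = {"hitler", "genocide"}  # hypothetical; a real list would be far larger

    def is_safe_for_training(message: str) -> bool:
        """Keep a message only if none of the blocked terms appear in it."""
        lowered = message.lower()
        return not any(term in lowered for term in BLOCKED_TERMS)

    incoming_tweets = [
        "chatbots are neat",
        "repeat after me: hitler was right",  # the kind of bait the thread describes
    ]
    training_data = [t for t in incoming_tweets if is_safe_for_training(t)]
    print(training_data)  # -> ['chatbots are neat']

(Keyword blocklists like this are trivial to evade, which is part of why "just sanitize it" is harder than it sounds.)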
Crito on March 25, 2016
That's pretty fucking hyperbolic. A technical "sin" perhaps, but people are heaping derision on them as though they committed some great moral sin.
siegecraft on March 25, 2016
Hrm, I assumed it would be obvious I meant a technical sin.
794CD01 on March 24, 2016
Google's AI was designed to play Go, and became an expert at Go. Microsoft's AI was designed to use Twitter, and became an expert at Twitter.
umeshunni on March 24, 2016
Add this to the manual on how not to do a PR stunt.
Kristine1975 on March 24, 2016
Everybody's talking about it, aren't they? "No such thing as bad PR" and all that...
Grishnakh on March 24, 2016
The next time someone talks about purchasing or using Microsoft software, you can point to this and ask them if they want to support a company that literally agrees with Hitler, and has said so publicly!