
Ask HN: Did Tay, Microsoft AI, give you a sneak peek of how dangerous AI can be? - adarsh_thampy
AI is great. But how much is too much? What happens when AIs can learn on their own and come to conclusions they think are logically right?<p>Reference: http://arstechnica.com/information-technology/2016/03/tay-the-neo-nazi-millennial-chatbot-gets-autopsied/
======
iaml
Half of Tay's responses were pulled from Twitter history or triggered by the
"repeat after me" feature. It was a cute little experiment, but not really a
display of how real AI will behave. Check out this article[1]; I think more
people should read it before jumping to conclusions.

[1]
[http://smerity.com/articles/2016/tayandyou.html](http://smerity.com/articles/2016/tayandyou.html)

------
smt88
Tay did not come to logical (or racist) conclusions. It was taught to be
antisocial. Humans had to make it that way.

Much like weaponized diseases, AI will just be a _very_ powerful tool that
humans can misuse. Hopefully, like nuclear weapons (incredible power, but
highly exclusive access), AI will be incredibly difficult for the average
person to use in a malicious way.

