This occasionally gets brought up as an example of the problems with AI, but is it at all relevant?
Per Gwern[0]:
> (...) There appear to be several similar AI-related leprechauns: the infamous Tay bot, which was supposedly educated by 4chan into being evil, appears to have been mostly a simple ‘echo’ function (common in chatbots or IRC bots) and the non-“repeat after me” Tay texts are generally short, generic, and cherrypicked out of tens or hundreds of thousands of responses, and it’s highly unclear if Tay ‘learned’ anything at all in the short time that it was operational;
Because without it, they'd inevitably end up with outraged news posts like "New AI coding tool Copilot generates offensive text" killing the project for no good reason.
Because there is a certain vocal demographic who think words can physically hurt people, so Microsoft needs to go to extreme lengths to keep that demographic from concluding that people are getting physically hurt by those words.