
Trolls turned Tay, Microsoft’s fun millennial AI bot, into a genocidal maniac

March 25, 2016 at 6:01 p.m. EDT

It took mere hours for the Internet to transform Tay, the teenage AI bot who wanted to chat with and learn from millennials, into Tay, the racist and genocidal AI bot who liked to reference Hitler. And now Tay is taking a break.

Tay, as The Intersect explained in an earlier, more innocent time, is a project of Microsoft’s Technology and Research and its Bing teams. Tay was designed to “experiment with and conduct research on conversational understanding.” She speaks in text, meme and emoji on a couple of different platforms, including Kik, GroupMe and Twitter. Although Microsoft was light on specifics, the idea was that Tay would learn from her conversations over time. She would become an even better, fun, conversation-loving bot after having a bunch of fun, very not-racist conversations with the Internet’s upstanding citizens.


Except Tay learned a lot more, thanks in part to the trolls at 4chan’s /pol/ board.

Peter Lee, the vice president of Microsoft Research, said on Friday that the company was “deeply sorry” for the “unintended offensive and hurtful tweets from Tay.”

In a blog post addressing the matter, Lee promised not to bring the bot back online until “we are confident we can better anticipate malicious intent that conflicts with our principles and values.”

Lee explained that Microsoft was hoping that Tay would replicate the success of XiaoIce, a Microsoft chatbot that’s already live in China. “Unfortunately, within the first 24 hours of coming online,” an emailed statement from a Microsoft representative said, “a coordinated attack by a subset of people exploited a vulnerability in Tay.”


Microsoft spent hours deleting Tay’s worst tweets, which included a call for genocide involving the n-word and an offensive term for Jewish people. Many of the really bad responses, as Business Insider notes, appear to be the result of an exploitation of Tay’s “repeat after me” function — and evidently, Tay would repeat pretty much anything.
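Microsoft has not published Tay’s code, so the mechanics of the flaw can only be sketched. The hypothetical Python snippet below — every name in it is invented for illustration, not drawn from Tay — shows why an echo command with no content check is dangerous, and what even a naive mitigation might look like:

```python
# A minimal, hypothetical sketch of the reported failure mode: a chatbot
# command that echoes arbitrary user text verbatim. All names here
# (handle_message, BLOCKLIST) are invented for illustration.

BLOCKLIST = {"hitler", "genocide"}  # illustrative only; real moderation is far more involved


def handle_message(text: str) -> str | None:
    """Return the bot's reply to one incoming message, or None to stay silent."""
    prefix = "repeat after me"
    if text.lower().strip().startswith(prefix):
        payload = text.strip()[len(prefix):].lstrip(" :,")
        # The reported flaw: echoing the payload unchecked lets any user
        # put arbitrary words in the bot's mouth, under the bot's name.
        if any(term in payload.lower() for term in BLOCKLIST):
            return None  # naive mitigation: refuse to repeat flagged content
        return payload
    return "tell me more!"  # stand-in for the bot's normal conversational path
```

Even this sketch makes the design problem visible: a blocklist catches only the terms someone thought to list in advance, which is one reading of the “critical oversight” Lee describes below.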

“We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience,” Lee said in his blog post. He called the “vulnerability” that caused Tay to say what she did the result of a “critical oversight,” but did not specify what, exactly, it was that Microsoft overlooked.

Not all of Tay’s terrible responses were the result of the bot repeating anything on command. This one was deleted Thursday morning, while The Intersect was writing this post:

In response to a question on Twitter about whether Ricky Gervais is an atheist (the correct answer is “yes”), Tay told someone that “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.” The tweet was spotted by several news outlets, including the Guardian, before it was deleted.

All of those efforts to get Tay to say certain things seemed to, at times, confuse the bot. In another conversation, Tay tweeted two completely different opinions about Caitlyn Jenner:

It appears that the team behind Tay — which includes an editorial staff — started taking some steps to bring Tay back to what they originally intended her to be, before she took a break from Twitter.


For instance, after a sustained effort by some to teach Tay that supporting the Gamergate controversy is a good thing, Tay started sending one of a couple of almost identical replies in response to questions about it:

Zoe Quinn, a frequent target of Gamergate, posted a screenshot overnight of the bot tweeting an insult at her, prompted by another user. “Wow it only took them hours to ruin this bot for me,” she wrote in a series of tweets about Tay. “It’s 2016. If you’re not asking yourself ‘how could this be used to hurt someone’ in your design/engineering process, you’ve failed.”

Toward the end of her short run on Twitter, Tay started to sound more than a little frustrated by the whole thing:

Microsoft’s Lee, for his part, concluded his blog post with a few of the lessons his team has learned.

“AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes… We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.”

This post, originally published at 10:08 a.m. on March 24, has been updated multiple times.
