
Why Microsoft’s racist Twitter bot should make us fear human nature, not A.I.

March 24, 2016 at 5:49 p.m. EDT
A T-800 Terminator in a scene from "Terminator Salvation," a Warner Bros. Pictures release.

Let me put it plainly. Despite what you may hear, Microsoft's racist, Hitler-loving A.I. is not how the robot uprising begins.

You might have seen some reports by now about Tay, a bot designed to sound like a teenager on the Internet and to learn from her interactions with other people. She knew how to use slang, deploy emoji and crack jokes. The goal was for Tay to become smarter, more conversant and a better interlocutor over time.

What we got was something very different. To many people's horror, Tay soon became a Holocaust denier, a genocide supporter, and a vocal racist lashing out at minority groups of every kind.

Microsoft responded by taking Tay offline for "adjustments," saying the company "became aware of a coordinated effort by some users to abuse Tay's commenting skills."

"Some of its responses are inappropriate," the company also said.

We'll get to how this abuse occurred in a moment, but first, let's look at how people have responded to the alarming statements Tay made.

"It seems like the Terminator scenario might actually happen," one reddit user wrote.

The fact that Tay spiraled out of control speaks to a deep-seated anxiety we've always had about our creations getting the better of us. And it's a completely natural feeling. But Tay is not an accurate preview of our A.I.-enabled future, and here's why: The whole point of the exercise was for Tay to reflect back at humanity whatever the Internet fed it.

Tay was deliberately programmed to parrot, among other things, whatever other Twitter users told it to say. By tweeting "repeat after me," trolls were able to manipulate Tay into saying some pretty horrible things. Tay then reused what it had learned in other responses, creating a depressing feedback loop of digital hate and bile.
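To make the flaw concrete, here is a minimal, hypothetical sketch of a bot that obeys "repeat after me" and folds everything it hears back into its repertoire. Microsoft never published Tay's code, so the ParrotBot class below is purely illustrative; the point is only that, without a filter, whatever trolls feed the bot comes straight back out in later conversations.

```python
import random

# Toy model of a parroting bot, for illustration only. This is not Tay's
# actual design, which was never published; it just shows why learning
# from unfiltered user input creates a feedback loop.

class ParrotBot:
    def __init__(self):
        self.learned_phrases = ["hello!", "tell me more"]  # seed vocabulary

    def respond(self, message: str) -> str:
        # The exploit: obey "repeat after me" verbatim, with no content filter.
        if message.lower().startswith("repeat after me:"):
            phrase = message.split(":", 1)[1].strip()
            self.learned_phrases.append(phrase)  # and remember it for later
            return phrase

        # Everything else is answered with something previously "learned",
        # so abusive input keeps resurfacing in later conversations.
        self.learned_phrases.append(message)
        return random.choice(self.learned_phrases)

bot = ParrotBot()
print(bot.respond("repeat after me: something awful"))  # echoed back verbatim
print(bot.respond("how are you?"))  # may resurface the awful phrase
```

Run against a handful of hostile messages, a toy like this degenerates almost immediately, which is roughly what happened to Tay at Internet scale.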

This is a glaring design flaw, and it's hard to understand how Microsoft failed to see it coming. Of course people are going to try to break your new toy, especially if it's "supposed" to show us a better side of humanity. It's why we can't have nice things. And it shows how careful engineers will have to be in designing the robots of the future.

Tay was a social experiment that went badly awry. But that doesn't amount to an indictment of A.I. None of the most important applications for A.I. that stand to reshape society work the way Tay does.

Take autonomous drones or driverless cars, for instance. They are designed for very specific uses, operate under stringent security protocols and face a huge amount of cultural and legal pressure to make safety the first priority. To the extent that designers are allowing these machines to make their own calls, it's within a very specific set of conditions — "Is there a pedestrian there or not?" — and a bounded set of rules.
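To sharpen the contrast, here is another hypothetical sketch, this time of a bounded decision rule of the kind safety-critical systems rely on. Real driverless-car software is vastly more complex than this, and the decide function below is invented for illustration; the point is simply that the machine chooses only among a fixed, engineer-reviewed set of actions, and nothing a stranger says can add new ones.

```python
# Hypothetical sketch of a bounded decision rule, for contrast with Tay.
# The set of possible actions is fixed in advance and reviewed by engineers,
# not learned from whoever happens to be nearby.

ALLOWED_ACTIONS = {"continue", "slow_down", "emergency_stop"}

def decide(pedestrian_detected: bool, distance_m: float) -> str:
    if pedestrian_detected and distance_m < 10.0:
        action = "emergency_stop"
    elif pedestrian_detected:
        action = "slow_down"
    else:
        action = "continue"
    # The output can never fall outside the pre-approved set.
    assert action in ALLOWED_ACTIONS
    return action

print(decide(pedestrian_detected=True, distance_m=5.0))    # emergency_stop
print(decide(pedestrian_detected=False, distance_m=50.0))  # continue
```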

By contrast, Tay was essentially unleashed on the world with a barely formed intellect. Its express purpose was to be shaped and molded by its environment. You can already start to see how divergent Tay's A.I. is from the ones powering the devices that will be ferrying us around and delivering our packages. Google doesn't seem at all ready to let 4chan or Reddit teach its cars to drive.

In a recent interview, U.S. Chief Technology Officer Megan Smith introduced me to something called the Cluetrain Manifesto, a series of principles about the Internet. One of the more recent ideas associated with its authors is: The Internet is us.

"It's just us, so people bring the harassment that they have, they bring that bullying, they bring all of that that exists in analog" to the digital world, Smith told me.

Tay seems like the perfect distillation of that idea. But don't expect her to be your chauffeur anytime soon.