
Why the Islamic State leaves tech companies torn between free speech and security

July 16, 2015 at 8:12 a.m. EDT
The Islamic State and its supporters use social media to post propaganda and recruit followers. The Washington Post takes a closer look at how several groups in the U.S. monitor this activity. (Video: Gillian Brockell and Jorge Ribas/The Washington Post)

When a lone terrorist slaughtered 38 tourists at a Tunisian resort on June 26, the Islamic State turned to one of America’s leading social-media companies to claim responsibility and warn of more attacks on the world’s nonbelievers.

"It was a painful strike and a message stained with blood," the Islamic State announced on Twitter following the massacre in Sousse , a popular destination for Europeans on the Mediterranean. "Let them wait for the glad tidings of what will harm them in the coming days, Allah permitting."

Three days before the assault, the Islamic State relied on another popular U.S. social-media platform, Google’s YouTube, to promote a grisly propaganda video of three separate mass killings. Men accused of cooperating with U.S.-coordinated airstrikes in Iraq and Syria are seen being incinerated in a car, drowned in a cage lowered into a swimming pool and decapitated by explosive necklaces looped around their necks.

Versions of it would remain on YouTube, even as company executives proclaimed during an international advertising festival that week in Cannes, France, that Google would not provide a “distribution channel for this horrible, but very newsworthy, terrorist propaganda.”

As the Islamic State, also known as ISIS and ISIL, continues to hold large parts of Iraq and Syria and inspire terrorist attacks in more and more countries, it has come to rely upon U.S. social-media companies to summon fresh recruits to its cause, spread its propaganda and call for attacks, according to counterterrorism analysts.

"We also have to acknowledge that ISIL has been particularly effective at reaching out to and recruiting vulnerable people around the world, including here in the United States ," President Obama said July 6 at the Pentagon. "So the United States will continue to do our part, by working with partners to counter ISIL's hateful propaganda, especially online."

The social-media savvy of the militant group is raising difficult questions for many U.S. firms: how to preserve global platforms that offer forums for expression while preventing groups such as the Islamic State from exploiting those free-speech principles to advance their terrorist campaign.

“ISIS has been confronting us with these really inhumane and atrocious images, and there are some people who believe if you type ‘jihad’ or ‘ISIS’ on YouTube, you should get no results,” Victoria Grand, Google’s director of policy strategy, told The Washington Post in a recent interview. “We don’t believe that should be the case. Actually, a lot of the results you see on YouTube are educational about the origins of the group, educating people about the dangers and violence. But the goal here is how do you strike a balance between enabling people to discuss and access information about ISIS, but also not become the distribution channel for their propaganda?”

Some lawmakers and government officials say the companies are not going far enough.

“They are being exploited by terrorists,” Assistant Attorney General for National Security John P. Carlin said in a recent interview. “I think there is recognition now that there is a problem, and so we’re starting to see people at the companies address additional resources. But more needs to be done because we’re still seeing the threat, and the threat is increasing, not decreasing.

“It’s not a problem just here in the United States. I think they’re hearing it from governments and customers from throughout the world.”

A field analysis in May by the Department of Homeland Security warns that the Islamic State's use of social media is broadening the terrorist group's reach.

“ISIL leverages social media to propagate its message and benefits from thousands of organized supporters globally online, primarily on Twitter, who seek to legitimize its actions while burnishing an image of strength and power,” according to the analysis. “The influence is underscored by the large number of reports stemming from social media postings.”

In Europe, some governments are requiring social-media companies to block or remove terror-related posts.

Earlier this month, the Senate Intelligence Committee approved a bill that would require social-media companies to alert federal authorities when they become aware of terrorist-related content on their sites. The bill is designed to provide law enforcement agencies with information about potential terror plots. It would not require firms to monitor any users or their communications.

Putting more pressure on the social-media companies, a U.N. panel last month called on the firms to respond to accusations that their sites are being exploited by the Islamic State and other groups.

In the United States, government regulation of speech, regardless of how offensive or hateful, is generally held to be unconstitutional under the First Amendment. The social-media companies — each with its own culture, mission and philosophy — have been governing how and when to block or remove terror-related content.

The revelations of former National Security Agency contractor Edward Snowden about U.S. government surveillance have also made the tech companies wary of cooperating with Washington.

Facebook has been the most aggressive of the large social-media companies when it comes to taking down terror-related content. The company has adopted a zero-tolerance policy and, unlike other social-media companies, proactively removes posts related to terrorist organizations. Facebook also relies on its users to alert the company to posts that promote or celebrate terrorism and hires screeners to review content that might violate its standards.

“We don’t allow praise or support of terror groups or terror acts, anything that’s done by these groups and their members,” said Monika Bickert, a former federal prosecutor who heads global policy management for Facebook.

Of all the large social-media companies, Twitter has been the most outspoken about protecting freedom of speech on its platform. Still, the company recently updated its abuse policy, stating that users may not threaten or promote terrorism.

“Twitter continues to strongly support freedom of expression and diverse perspectives,” according to a statement by a Twitter official, who spoke on the condition of anonymity because of recent death threats against employees by Islamic State supporters. “But it also has clear rules governing what is permissible. . . . The use of Twitter by violent extremist groups to threaten horrific acts of depravity and violence is of grave concern and against our policies, period.”

Another challenge for the companies: It is often difficult to distinguish between communiques from terrorist groups and posts by news organizations and legitimate users. Internet freedom advocates also note that much of what groups such as the Islamic State are posting can be seen as part of the historical record — even though many of the photographs and videos are horrific.

They point to the memorable 1968 Associated Press photograph of South Vietnam's national police commander shooting a suspected Viet Cong fighter in the head on a Saigon street. They wonder how that Pulitzer Prize-winning image, which came to symbolize the chaos and brutality of the Vietnam War, would be handled in the age of social media and modern digital warfare.

“You want to live in a world where people have access to news — in other words, documentary evidence of what is actually happening,” said Andrew McLaughlin, a former Google executive and chief U.S. technology officer who now is a partner in the tech and media start-up firm Betaworks in New York. “And an ISIS video of hostages being beheaded is both an act of propaganda and is itself a fact. And so if you’re a platform, you don’t want to suppress the facts. On the other hand, you don’t want to participate in advancing propaganda.

“And there is the conundrum.”

‘Pure evil’

Before the rise of social media, many of the three dozen video and audio messages Osama bin Laden issued before his death were recorded in remote locations, smuggled out by couriers, and aired on what was then a largely unknown television station based in Qatar called Al Jazeera. Weeks could pass between the time when bin Laden spoke and when he was heard.

Al-Qaeda operatives communicated through password-protected forums and message boards on the Internet. Access was tightly controlled.

"It was a different time," said Steven Stalinsky, executive director of the Middle East Media Research Institute, which tracks online communications of terrorist organizations. "The jihadi groups decided what could be posted and released. Twitter became the way around the forums. It became the Wild West of jihad." The propaganda wars since 9/11

Before his death, bin Laden had come to recognize the revolution that followed the launch of Facebook in 2004 and Twitter in 2006.

“The wide-scale spread of jihadist ideology, especially on the Internet, and the tremendous number of young people who frequent the Jihadist Web sites [are] a major achievement for jihad,” bin Laden wrote in a May 2010 letter that was later found by U.S. Special Operations forces inside his Pakistan compound.

Al-Shabab, a militant group in Somalia allied with al-Qaeda, became one of the first terrorist organizations to use Twitter for both propaganda and command and control during an attack, according to terrorism analysts. The group set up Twitter accounts under al-Shabab's media wing, called HMS Press.

In September 2013, al-Shabab attracted worldwide attention when it live-tweeted a terror attack it carried out at the upscale Westgate shopping mall in Nairobi.

“What Kenyans are witnessing at #Westgate is retributive justice for crimes committed by their military, albeit minuscule in nature,” HMS Press tweeted. A short time later, the group posted another tweet: “Since our last contact, the Mujahideen inside the mall confirmed to @HMS_Press that they killed over 100 Kenyan kuffar & battle is ongoing.”

In the end, more than 60 people were killed and an additional 175 wounded. Twitter took down those accounts that day, marking one of the first times the company removed material posted by a terrorist organization. But al-Shabab quickly created new Twitter accounts under different names — illustrating both the utility of the platform and the difficulty of policing it.

The attack and how it played out in real time inspired terrorists around the world.

"We must make every effort to reach out to Muslims both through new media like Facebook and Twitter," Adam Gadahn, an American-born al-Qaeda propagandist, proclaimed in a 2013 interview. (In January, he was killed in a U.S. strike.)

The Islamic State has gone on to make Twitter one of its most important tools.

FBI Director James B. Comey testified to Congress this month about how the Islamic State is reaching out through Twitter to about 21,000 English-language followers. The group’s message, he said, is, “Come to the so-called caliphate and live the life of some sort of glory or something; and if you can’t come, kill somebody where you are; kill somebody in uniform; kill anybody; if you can cut their head off, great; videotape it; do it, do it, do it.” He described it as “a devil on their shoulder all day long, saying: Kill, kill, kill, kill.”

Comey also said that Twitter has become “particularly aggressive at shutting down and trying to stop ISIL-related sites. I think it led ISIL to threaten to kill their CEO, which helped them understand the problem in a better way.”

Others are not convinced.

“Twitter is providing a communication device, a loudspeaker for ISIS,” said Mark Wallace, a former U.S. ambassador who now runs the Counter Extremism Project, a nonprofit group that tracks terrorists and attempts to disrupt their online activities. “If you are promoting violence and a call to violence, you are providing material support. Twitter should be part of the solution. If not, they are part of the problem.”

At a Constitution Project dinner in April honoring Twitter for its leadership on First Amendment issues, Colin Crowell, the firm’s head of global public policy, acknowledged that Twitter has hosted “painful content” and content reflecting “terrorism, government repression” on its site. But, he said, “it is also a place where people can find . . . information, conversation and where empathy can be shared.”

The “key thing,” he said, “for us at Twitter is to recognize our role as the provider of this open platform for free expression . . . to recognize that that speech is not our own.”

It is “precisely because it’s not our own content that we feel we have a duty to respect and to defend those voices on the platform,” Crowell said. “The platform of any debate is neutral. The platform doesn’t take sides.”

In August 2014, the Islamic State uploaded a video on YouTube and other sites showing the beheading of American journalist James Foley.

A succession of other videotaped beheadings of Americans and Britons followed — Steven Sotloff, Peter Kassig, David Haines, Alan Henning — as well as the immolation of the Jordanian pilot Muath al-Kaseasbeh and the mass killings of Syrians, Kurds and Coptic Christians, among others.

Each slaying became a carefully orchestrated and slickly produced event.

“Pure evil,” President Obama called Kassig’s beheading.

For Facebook, the killings marked a turning point. The company made it easier for its 1.4 billion users — the world's largest user base — to report content from suspected terrorist groups, and it began to aggressively remove their posts. The company also deployed teams of people around the world to review content that had been flagged as terrorist-related to determine whether the posts were in fact from terrorist groups in violation of Facebook's terms of service.

Facebook has banned terror-related content from its pages for more than five years. In March, the company updated its community standards, explicitly prohibiting posts that praise or celebrate terrorist organizations and their leaders.

Bickert, Facebook’s policy chief, said posts flagged by users are examined by “operations teams” of content reviewers stationed in four offices around the world.

“We want to make sure we’re keeping our community safe, and we’re not a tool for propaganda,” Bickert said. “On the other hand, we can see that people are . . . talking about ISIS and are concerned about ISIS, in part, because they’ve seen this imagery and it makes it very real to people. So none of these issues are easy.”

‘Good luck’

France's interior minister was still reeling from the Jan. 7 terror attack on the Paris offices of the satirical newspaper Charlie Hebdo when he attended a White House counterterrorism summit in February.

The Islamic State and al-Qaeda had turned the Paris attack that left 12 dead into a propaganda coup. The groups boasted about the killings on social media, transmitting images that included the fatal shooting of a police officer as he lay wounded on a sidewalk, raising his arm in surrender.

While in Washington, the French official, Bernard Cazeneuve, had lunch with then-U.S. Attorney General Eric H. Holder Jr. Cazeneuve told Holder that he was planning to meet with executives of social-media companies in Silicon Valley the following day, hoping to persuade them to stop terrorists from using their sites for propaganda, recruitment and operational planning.

According to a French official, Cazeneuve asked Holder whether he had any advice before he left for California.

“Good luck,” the attorney general said.

Cazeneuve arrived in California on Feb. 20 and met with executives of several social-media companies, including Facebook, Twitter and YouTube.

“We needed to have the help of the companies,” said a French official who spoke on the condition of anonymity because he was not authorized to discuss the trip on the record. “How could we work [together] much faster and quicker?”

The official said the meeting with Facebook went well. The company’s vice president vowed that Facebook would continue to take down terror-related content from the site.

At Google, the French officials met with public policy and legal executives, who said they had been removing terror-related posts and would continue to do so; YouTube users flag about 100,000 posts each day that are suspected of being in violation of the company’s terms of service.

Google officials also noted that the airing of the Charlie Hebdo video on YouTube was the subject of intense debate inside the company. In the end, company officials decided to leave the video up, on the grounds that it was newsworthy and had become part of the historical record. The video has since been deleted from YouTube’s channel in France at the request of French officials.

The French minister’s meeting with Twitter did not go well.

“It was our most difficult meeting,” the French official said. “The minister showed pictures of the Paris attack that were sent out on Twitter, including the execution of the police officer,” he recalled. “He was very graphic in his explanation. They had a lengthy explanation that it was not easy. We argued that child pornography is being taken down. They said their algorithms were not as easy to set up to find jihadi information. You need a bunch of people to review the material.”

The meeting ended “with no specific commitments” from Twitter, the official said.

The Twitter official said the firm does not comment on private meetings with government officials. “We have a strong working relationship with French law enforcement that predates the Charlie Hebdo attack,” he said.

In April, an Islamic State supporter in Somalia called for a Charlie Hebdo-style attack in the United States. The post inspired two men to try to attack a Garland, Tex., event where cartoonists were drawing the prophet Muhammad, according to Rita Katz, executive director of the SITE Intelligence Group, which tracks terrorists' online communications.

The men were gunned down by security teams before they could open fire, but Katz said the attack could have ended very differently.

“Once you start using Twitter, you start to understand how powerful it is, and that is why ISIS is taking advantage of it,” Katz said. “Twitter must understand that they have to be responsible for the kind of information that they disseminate.”

No quick fixes

Confronting the Islamic State online and removing its material is a constant challenge, computer scientists say. Lawmakers, government officials and terrorism experts frequently cite social-media companies’ efforts to rid their sites of child pornography. If they can remove that content, why can’t they screen out tweets and posts from terror groups?

From a computer science standpoint, solving the child pornography problem was relatively straightforward. The National Center for Missing & Exploited Children maintains a database of thousands of photographs of child pornography, images that are frequently downloaded by pedophiles and traded over the Internet. Software called Microsoft PhotoDNA scans those images and identifies each one with a unique digital marker.

Every time a new image is uploaded onto a site, a company can run it against the database, which compares the digital markers. Anything that matches is deleted and, by federal law, reported to the national center and then to law enforcement agencies.

Many social-media companies, including Twitter and Facebook, rely on the software, which can recognize images in still photos, but not videos.
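PhotoDNA itself is proprietary and its interface is not public, but the workflow the paragraphs above describe is simple to sketch. The fragment below is a minimal, hypothetical illustration in Python: a plain SHA-256 digest stands in for PhotoDNA's edit-resistant fingerprint, and the file names and reporting step are assumptions for illustration only.

```python
# Minimal sketch of the hash-database workflow described above.
# NOTE: a plain SHA-256 digest is an exact-match stand-in; real PhotoDNA
# fingerprints are robust to resizing and recompression.
import hashlib
from pathlib import Path


def fingerprint(image_bytes: bytes) -> str:
    """Compute a digital marker for an image."""
    return hashlib.sha256(image_bytes).hexdigest()


# Fingerprints of previously identified images (the shared database).
# File names here are purely illustrative.
known_fingerprints = {
    fingerprint(Path(p).read_bytes())
    for p in ["known_image_1.jpg", "known_image_2.jpg"]
}


def screen_upload(image_bytes: bytes) -> bool:
    """Return True if a new upload matches the database of known images."""
    if fingerprint(image_bytes) in known_fingerprints:
        # For child-pornography matches, federal law requires a report to the
        # National Center for Missing & Exploited Children, then law enforcement.
        return True
    return False
```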

Flagging terror-related content is more complex — but not impossible, computer scientists say.

Hany Farid, a Dartmouth computer science professor who co-developed Microsoft PhotoDNA, said the software is licensed to the national center solely to identify images of child pornography. But he said the software could be used to flag terror-related propaganda. For example, the software could identify a photograph of Foley, the American journalist, allowing companies to catch images of his beheading before they appear on their sites.

“The technology is extremely powerful, but it’s also limited,” Farid said. “You can only find images that you’ve already found before.”

Social-media companies also could download images of the Islamic State's black flag, an image frequently displayed on the group's propaganda posts and communiques, and create "hash values," or digital fingerprints, of the images to search for them online, computer scientists say.
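As a rough illustration of that idea, the sketch below uses the open-source imagehash library as a stand-in, since no specific tool is named here; the reference image, post files and distance threshold are hypothetical. A perceptual hash, unlike an exact digest, still matches when a flag image has been resized or recompressed.

```python
# Hypothetical sketch: scan post images for near-copies of a reference image,
# such as the Islamic State's black flag, using a perceptual hash.
from PIL import Image
import imagehash

# Fingerprint of the reference image (illustrative file name).
flag_hash = imagehash.phash(Image.open("reference_flag.jpg"))


def contains_flag(post_image_path: str, threshold: int = 8) -> bool:
    """Flag posts whose image is visually close to the reference flag."""
    distance = imagehash.phash(Image.open(post_image_path)) - flag_hash
    return distance <= threshold  # Hamming distance; threshold is a tuning choice


# A match only says the image appears in the post -- as noted below, it cannot
# distinguish a news report or think-tank study from a terrorist communique,
# so flagged posts would still need human review.
for path in ["post_001.jpg", "post_002.jpg"]:
    if contains_flag(path):
        print(f"{path}: possible match -- queue for review")
```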

But while social-media companies could use such techniques to detect every post with an Islamic State flag, not all of those posts would necessarily have come from the terrorist group. A journalist could have tweeted out a link containing material from the Islamic State, or a government agency or think tank could have issued a report about the group that contains an image of the flag.

The sheer volume of the content on social-media sites also poses a challenge, computer scientists said. Twitter has 302 million active users who send out 500 million tweets a day. YouTube has more than 1 billion users. Every minute of every day, they upload more than 300 hours of video.

“There is a long history of government asking technology companies to do things they can’t do. They say America has put a man on the moon. Why can’t the companies do this?” said Christopher Soghoian, principal technologist and senior policy analyst for the American Civil Liberties Union. “People treat computers like magic boxes. There is no silver bullet here. Companies are going to be reluctant to roll out technology that is going to have a high rate of false positives.”

Whac-a-Mole

As the more established social-media companies become more aggressive in monitoring and removing terror-related content, groups such as the Islamic State are also migrating to lesser-known sites, where they can share their messages and videos. The sites include Instagram, Tumblr and Soundcloud, according to terror experts.

One of the sites, the nonprofit Internet Archive in San Francisco, has been around for nearly 20 years.

The archive was founded in 1996 to provide the public with free access to millions of documents, videos, clips and Web pages — almost anything that has been on the Web. It is probably best known for its Wayback Machine. So far, it has captured and stored nearly 150 billion Web pages.

In the past year, the Islamic State has created several accounts on the archive and has been using the site to host video and audio productions, online magazines and radio broadcasts, according to terrorism experts.

Internet Archive’s office manager, Chris Butler, told The Post that his organization is removing videos of beheadings and executions whenever it becomes aware of them, either during routine maintenance of the site or after outside complaints.

But unlike sites such as Facebook and Twitter, the archive does not have a flagging mechanism. Butler said the group is working on a system that will enable users to help identify and report problematic content.

“We do our best with a very small team and no lawyers on staff, and have nowhere near the budget of larger commercial sites handling similar quantities of content to us, like YouTube, Twitter and Facebook,” Butler said.

Twitter has recently stepped up efforts to remove terrorist accounts. In April, it took down 10,000 accounts over two days; the Islamic State was backed by 46,000 accounts on Twitter in 2014. The takedowns have led security researchers such as Daniel Cuthbert to lament the loss of what he saw as a valuable source of intelligence.

Cuthbert, chief operating officer of Sensepost, a cybersecurity firm, supports removal of videos of beheadings and other content that “glorifies ISIS.” But he said he has lost a window into conversations between Islamic State members, supporters and potential recruits.

“I no longer have the ability to see who the key people are in ISIS when it comes to a social-media campaign, and how they’re tweeting, who they’re tweeting to, and how many are British nationals who may be getting groomed,” said Cuthbert, who is based in London.

After Twitter conducted the mass takedown, Cuthbert requested access to Twitter's "firehose" — its entire stream of tweets. But a Twitter employee denied his request, citing concerns that he was sharing the material with law enforcement.

“We have certain sensitivities with use cases that look at individuals in an investigative manner, especially when insights from that investigation are directly delivered to law enforcement or government agencies to be acted upon,” the employee said in an e-mail to Cuthbert, which he shared with The Post.

The FBI’s Comey told reporters “there’s actually a discussion within the counterterrorism community” as to whether it is better to shut the accounts down or keep them up so they can be tracked for intelligence purposes. “I can see the pros and cons on both sides. But it’s an issue that’s live,” he said.

Counterterrorism officials say the constantly evolving social-media landscape is providing more places for groups such as the Islamic State to hide in cyberspace. Finding and shutting down sites and accounts is starting to resemble a carnival game of Whac-a-Mole, they say. As soon as one site or account is taken down, another pops up. As soon as one platform starts aggressively monitoring terrorist content, militants migrate to another.

Worse, investigators and terrorism analysts fear that the Islamic State and other terrorist groups are moving beyond public-facing social-media platforms for recruitment, increasingly relying on encrypted sites where their communications can continue largely undetected.

Comey recently said he is concerned that the Islamic State will use Twitter or another popular social-media platform to make contact with followers before “steering them off of Twitter to an encrypted form of communication.’’

John D. Cohen, a former top intelligence official at the Department of Homeland Security, said counterintelligence officials have traditionally searched for the proverbial needle in a haystack when trying to identify terrorists and their plots. The explosion of social-media sites, he said, has vastly complicated that search.

“The haystack is the entire country now,” Cohen said. “Anywhere there’s a troubled soul on the Internet and a potential Twitter follower, that haystack extends. We’re looking for needles. But here’s the hard part: Increasingly, the needles are invisible to us.”

CONFRONTING THE 'CALIPHATE' | This is part of an occasional series about the rise of the Islamic State militant group, its implications for the Middle East, and efforts by the U.S. government and others to undermine it.

Read more in the series:

In a propaganda war against ISIS, the U.S. tried to play by the enemy's rules

From hip-hop to jihad, how the Islamic State became a magnet for converts

The hidden hand behind the Islamic State militants? Saddam Hussein's

'Jihadi John': Islamic State killer is identified as Londoner Mohammed Emwazi

Fauzeya Rahman, a fellow at the Investigative Reporting Workshop at American University, contributed to this report.