
Microsoft won’t sell police its facial-recognition technology, following similar moves by Amazon and IBM

June 11, 2020 at 2:30 p.m. EDT
Microsoft President Brad Smith speaks with industry executives at the White House in May about reopening the country amid the coronavirus pandemic. (Alex Brandon/AP)

SEATTLE — Microsoft has joined the list of tech giants that have decided to limit the use of their facial-recognition systems, announcing that it will not sell the controversial technology to police departments until there is a federal law regulating it.

The move, which Microsoft President Brad Smith confirmed at a Washington Post Live event Thursday, follows similar decisions by Amazon and IBM as protesters nationwide press for an end to police brutality and racial profiling.

Smith said Microsoft has not sold its facial-recognition technology to police departments. And the company has backed legislation in California that would allow police use of the technology with some restrictions.


“We will not sell facial-recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology,” Smith said.

The company plans to put in place “review factors” that Smith said would “go even beyond what we already have” to govern uses of the technology outside law enforcement.

“The bottom line for us is to protect the human rights of people as this technology is deployed,” Smith said.


The decision by Microsoft comes a little more than two weeks after the killing of George Floyd, an unarmed black man who died after a Minneapolis police officer dug his knee into Floyd’s neck. Nationwide protests have called for changes in policing.

Tech giants have invested heavily to develop facial-recognition systems as they battle to lead in a key emerging business. Consumers already use the technology to unlock smartphones and tag friends in photos on social media.

Privacy advocates have long raised concerns that police use of facial recognition could lead to the wrongful arrests of people who bear only a resemblance to a video image. And studies have shown that facial-recognition systems misidentify people of color more often than white people.


On Wednesday, Amazon said it banned police from using its facial-recognition technology for a year to give Congress “enough time to implement appropriate rules.” (Amazon chief executive Jeff Bezos owns The Washington Post.)


A day earlier, IBM said it would get out of the facial-recognition business altogether over concerns about how the technology can be used for mass surveillance and racial profiling.

After those decisions, critics of police use of the technology increased pressure on Microsoft to follow.

“Microsoft also needs to take a stand,” Joy Buolamwini, an MIT Media Lab researcher, told The Washington Post. Buolamwini co-wrote a study that found that Amazon’s facial-recognition system performed more accurately when assessing lighter-skinned faces.


Both Smith’s remarks and Amazon’s statement mentioned only a moratorium on police departments’ use of the technology. Neither company said whether the new policies would bar other government agencies, such as U.S. Immigration and Customs Enforcement, from deploying the companies’ facial-recognition technology.

Although Microsoft followed Amazon and IBM, it was the first to call on the U.S. government to regulate facial-recognition technology two years ago. At the time, Microsoft argued that tech giants weren’t likely to regulate themselves.

The American Civil Liberties Union, which has criticized police use of facial-recognition technology, called on federal and state lawmakers to ban police use altogether.

“When even the makers of face recognition refuse to sell this surveillance technology because it is so dangerous, lawmakers can no longer deny the threats to our rights and liberties,” Matt Cagle, a technology and civil liberties lawyer with the ACLU of Northern California, said in a statement. “Congress and legislatures nationwide must swiftly stop law enforcement use of face recognition, and companies like Microsoft should work with the civil rights community — not against it — to make that happen.”

And he said that any legislation should be vetted by the communities most affected.

“No company-backed bill should be taken seriously unless the communities most impacted say it is the right solution,” Cagle said.

Like its peers, Microsoft has struggled to balance its relationship with the Defense Department against its employees’ ethical and policy qualms about working with the U.S. government. In the wake of employee complaints in 2018 about Microsoft bidding on a giant cloud-computing contract with the department, Smith made clear that the company would continue to work with the military while looking for ways to ensure its technology is used responsibly.

“We want the people of this country and especially the people who serve this country to know that we at Microsoft have their back,” Smith wrote in a blog post at the time.