The Washington Post

The U.S. says humans will always be in control of AI weapons. But the age of autonomous war is already here.

The Pentagon says a ban on AI weapons isn’t necessary. But missiles, guns and drones that think for themselves are already killing people in combat, and have been for years.

July 7, 2021 at 10:00 a.m. EDT
(Jean-Francois Podevin for The Washington Post)

Picture a desert battlefield, scarred by years of warfare. A retreating army scrambles to escape as its enemy advances. Dozens of small drones, indistinguishable from the quadcopters used by hobbyists and filmmakers, come buzzing down from the sky, using cameras to scan the terrain and onboard computers to decide on their own what looks like a target. Suddenly they begin divebombing trucks and individual soldiers, exploding on contact and causing even more panic and confusion.

This isn’t a science fiction imagining of what future wars might be like. It’s a real scene that played out last spring as soldiers loyal to the Libyan strongman Khalifa Hifter retreated from the Turkish-backed forces of the United Nations-recognized Libyan government. According to a U.N. group of weapons and legal experts appointed to document the conflict, drones that can operate without human control “hunted down” Hifter’s soldiers as they fled.

The U.S., Russia and China say a ban on AI weapons is unnecessary. But a growing number of activists and international allies are pushing for restrictions. (Video: Jonathan Baran/The Washington Post)

Drones have been a key part of warfare for years, but they’ve generally been remotely controlled by humans. Now, by cobbling together readily available image-recognition and autopilot software, manufacturers can mass-produce autonomous drones on the cheap.

Efforts to enact a total ban on lethal autonomous weapons, long demanded by human rights activists, are now supported by 30 countries. But the world’s leading military powers insist a ban isn’t necessary. The U.S. military says the concerns are overblown and that humans can effectively control autonomous weapons, while Russia’s government says true AI weapons can’t be banned because they don’t exist yet.

But the facts on the ground show that technological advancements, coupled with complex conflicts like the Syrian and Libyan civil wars, have created a reality where weapons that make their own decisions are already killing people.

“The debate is very much still oriented towards the future,” said Ingvild Bode, an autonomous weapons researcher at the University of Southern Denmark. “We should take a much closer look at what is already going on.”


Libya wasn’t the only place where drones capable of killing autonomously were used last year. Turkey has used the same quadcopters to patrol its border with Syria. When Azerbaijan invaded Armenian-occupied territory in September, it sent in both Turkish- and Israeli-made “loitering munitions,” drones that can autonomously patrol an area and automatically divebomb enemy radar emitters. These weapons look like smaller versions of the remote-controlled drones that have been used extensively by the U.S. military in Iraq, Afghanistan and other conflicts. Instead of launching missiles by remote control, though, loitering munitions carry a built-in explosive and destroy themselves on impact with their target.

Since they have both remote-control and autonomous capability, it’s impossible to know from the outside whether humans made the final call to bomb individual targets. Either way, the drones devastated Armenia’s army, and the war ended two months later with Azerbaijan gaining huge swaths of territory.

These kinds of weapons are moving firmly into the mainstream. Today, there are dozens of projects by multiple governments to develop loitering munitions. Even as countries like the United States, China and Russia participate in discussions about a treaty limiting autonomous weapons, they’re racing ahead to develop them.

“The advanced militaries are pushing the envelope of these technologies,” said Peter Asaro, a professor at the New School in New York and a co-founder of the International Committee for Robot Arms Control, which advocates for stricter rules around lethal autonomous weapons. “They will proliferate rapidly.”


Over the past decade, cheaper access to computers that can crunch massive data sets in a short time has allowed researchers to make huge breakthroughs in designing computer programs that pull insights from large amounts of information. AI advances have led to machines that can write poetry, accurately translate languages and potentially help scientists develop new medicines.

But debates about the dangers of relying more on computers to make decisions are raging. AI algorithms are only as good as the data sets they were trained on, and studies have shown facial recognition AI programs are better at identifying White faces than Black and Brown ones. European lawmakers recently proposed strict new rules regulating the use of AI.

Companies including Google, Amazon, Apple and Tesla have poured billions of dollars into developing the technology, and critics say AI programs are sometimes being deployed without full knowledge of how they work and what the consequences of widespread use could be.

Some countries, such as Austria, have joined the call for a global ban on autonomous weapons, but U.S. tech and political leaders are pushing back.

In March, a panel of tech luminaries, including former Google chief executive Eric Schmidt, Andy Jassy, then the head of Amazon Web Services and now Amazon’s chief executive, and Microsoft chief scientist Eric Horvitz, released a study on the impact of AI on national security. The 756-page final report, commissioned by Congress, argued that Washington should oppose a ban on autonomous weapons because it would be difficult to enforce and could stop the United States from using weapons it already has in its arsenal.


“It may be impossible to define the category of systems to be restricted in such a way that provides adequate clarity while not overly constraining existing U.S. military capabilities,” the report said.

In some places, AI tech like facial recognition has already been deployed in weapons that can operate without human control. As early as 2010, the arms division of South Korean tech giant Samsung built autonomous sentry guns that use image recognition to spot humans and fire at them. Similar sentry guns have been deployed by Israel on its border with the Gaza Strip. Both governments say the weapons are controlled by humans, though the systems are capable of operating on their own.

But even before the development of facial recognition and super-fast computers, militaries were turning to automation to gain an edge. During the Cold War, both sides developed missile defense systems that could detect an enemy attack and fire automatically.

The use of these weapons has already had deadly effects.

In March 2003, just days after the invasion of Iraq by the United States and its allies began, British air force pilot Derek Watson was screaming over the desert in his Tornado fighter jet. Watson, a squadron commander, was returning to Kuwait in the dead of night after bombing targets in Baghdad. Another jet, crewed by Kevin Main and Dave Williams, followed behind.


Twenty thousand feet below, the computer of a U.S. Army Patriot missile battery picked up one of the two jets and decided it was an enemy missile diving straight toward the battery. The system flashed alerts in front of its human crew, telling them they were in danger. They fired.

Watson saw a flash and immediately wrenched his plane to the right, firing off flares meant to distract heat-seeking missiles. But the missile wasn’t targeting him. It shot up and slammed into Main and Williams’s plane, killing them before they had time to eject, a Department of Defense investigation later concluded.

“It’s not something I’ll ever forget,” Watson, who left the Royal Air Force in the mid-2000s and is now a leadership coach, recounted in an interview recently. “As a squadron commander, they were my guys.”

Patriot missile crews had been warned about operating in autonomous mode. But it took another friendly-fire incident almost two weeks later, when the system shot down and killed U.S. Navy F-18 pilot Nathan Dennis White, before strict rules were put in place that effectively stopped the missile batteries from operating for the remainder of the war.

Weapons like the Patriot usually involve a computer matching radar signatures against a database of planes and missiles, then deciding whether the object is a friend or foe. Human operators generally make the final call on whether to fire, but experts say the stresses of combat and the tendency to trust machines often blur the line between human and computer control.
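In rough outline, that kind of decision support can be thought of as a nearest-match lookup against a table of known radar profiles, with a label and a confidence score surfaced to the crew, who decide whether to act. The short Python sketch below is purely illustrative and hypothetical, not the logic of the Patriot or any real system; the feature vectors, database entries and confidence formula are all invented for the example.

import math

# Hypothetical database: each known object is a made-up feature vector
# (speed in m/s, altitude in m, radar cross-section in m^2) with a label.
SIGNATURE_DB = [
    ((250.0, 6000.0, 12.0), "friendly aircraft"),
    ((600.0, 9000.0, 5.0), "hostile aircraft"),
    ((900.0, 3000.0, 0.5), "incoming missile"),
]

def classify(track, db=SIGNATURE_DB):
    """Return (label, confidence) for the database signature closest to the track."""
    label, best = min(
        ((lbl, math.dist(track, sig)) for sig, lbl in db),
        key=lambda pair: pair[1],
    )
    # Turn distance into a rough 0-to-1 score: a closer match means higher confidence.
    confidence = 1.0 / (1.0 + best / 100.0)
    return label, confidence

if __name__ == "__main__":
    track = (870.0, 3500.0, 0.8)  # a return that loosely resembles the missile entry
    label, confidence = classify(track)
    print(f"Computer advises: {label} (confidence {confidence:.2f})")
    # The machine only advises. Whether to fire on the alert is the human's call,
    # and that is the line experts say tends to blur under the stress of combat.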


“We often trust computer systems; if a computer says I advise you to do this, we often trust that advice,” said Daan Kayser, an autonomous weapons expert at Dutch peace-building organization PAX. “How much is the human still involved in that decision-making?”

The question is key for the U.S. military, which is charging ahead on autonomous weapons research but maintains that it won’t ever outsource the decision to kill to a machine.

In 2012, the Defense Department issued guidelines for autonomous weapons, requiring them “to allow commanders and operators to exercise appropriate levels of human judgment.”

Though a global, binding treaty restricting autonomous weapons looks unlikely, the fact that governments and weapons companies are stressing that humans will remain in control shows that awareness around the risks is growing, said Mary Wareham, a Human Rights Watch director who for years led the Campaign to Stop Killer Robots, an international effort to limit autonomous weapons.

And as with land mines, chemical weapons and nuclear bombs, not every country needs to sign a treaty for the world to recognize that using such weapons goes too far, Wareham said. Though the United States has refused to sign on to a 2010 ban against cluster munitions, controversy around the weapons led U.S. companies to voluntarily stop making them.


Still, the pandemic has slowed those efforts. A meeting in Geneva scheduled for the end of June to get discussions going again was recently postponed.

The U.S. and British militaries both have programs to build “swarms” of small drones that operate as a group using advanced AI. The swarms could be launched from ships and planes and used to overwhelm a country’s defenses before regular troops invade. In 2017, the Pentagon asked for proposals for how it could launch multiple quadcopters in a missile, deposit them over a target and have the tiny drones autonomously find and destroy targets.

“How can you control 90 small drones if they’re making decisions themselves?” Kayser said. Now imagine a swarm of millions of drones.

The U.S. military has also experimented with putting deep-learning AI into flight simulators, and the algorithms have shown they can match the skills of veteran human pilots in grueling dogfights. The United States says that when AI pilots are ready to be deployed, they will be used only as “wingmen” to human pilots.

As in other areas where artificial intelligence technology is advancing, it can be hard to pinpoint exactly where the line between human and machine control lies.

“Just like in cars, there is this spectrum of functionality where you can have more autonomous features that can be added incrementally that can start to, in some cases, really blur the lines,” said Paul Scharre, a former Special Operations soldier and vice president and director of studies at the Center for a New American Security. He also helped draft the Pentagon’s guidelines on autonomous weapons.


Autonomy slowly builds as weapons systems get upgraded over time, Scharre said. A missile that used to home in on a single enemy might get a software upgrade allowing it to track multiple targets at once and choose the one it’s most likely to hit.

Technology is making weapons smarter, but it’s also making it easier for humans to control them remotely, Scharre said. That gives humans the ability to stop missiles even after launch if they realize the weapons might hit a civilian target.

Still, the demand for speed in war will inevitably push militaries to offload more decisions to machines, especially in combat situations, Kayser said. It’s not hard to imagine opposing algorithms responding to each other faster than humans can monitor what’s happening.

“You saw it in the flash crashes in the stock market,” Kayser said. “If we end up with this warfare going at speeds that we as humans can’t control anymore, for me that’s a really scary idea. It’s something that’s maybe not even that unrealistic if these developments go forward and aren’t stopped.”