The Washington Post | Democracy Dies in Darkness

AI poses ‘risk of extinction’ on par with nukes, tech leaders say

Dozens of tech executives and researchers signed a new statement on AI risks, but their companies are still pushing the technology

Updated May 30, 2023 at 5:25 p.m. EDT|Published May 30, 2023 at 9:29 a.m. EDT
OpenAI CEO Sam Altman, testifying before a Senate panel earlier this month, is among more than 350 signatories of an open letter on the risks that AI poses to humanity. (Jabin Botsford/The Washington Post)

Hundreds of artificial intelligence scientists and tech executives signed a one-sentence letter that succinctly warns AI poses an existential threat to humanity, the latest example of a growing chorus of alarms raised by the very people creating the technology.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” according to the statement released Tuesday by the nonprofit Center for AI Safety.

The open letter was signed by more than 350 researchers and executives, including Sam Altman, who is CEO of the ChatGPT creator OpenAI, as well as 38 members of Google’s DeepMind artificial intelligence unit.

Altman and others have been at the forefront of the field, pushing new “generative” AI to the masses, such as image generators and chatbots that can have humanlike conversations, summarize text and write computer code. OpenAI’s ChatGPT bot was the first to launch to the public in November, kicking off an arms race that led Microsoft and Google to launch their own versions earlier this year.

Since then, a growing faction within the AI community has been warning about the potential risks of a doomsday-type scenario where the technology grows sentient and attempts to destroy humans in some way. They are pitted against a second group of researchers who say this is a distraction from problems like inherent bias in current AI, the potential for it to take jobs and its ability to lie.

Skeptics also point out that companies that sell AI tools can benefit from the widespread idea that they are more powerful than they actually are — and they can front-run potential regulation on shorter-term risks if they hype up those that are longer term.

Dan Hendrycks, a computer scientist who leads the Center for AI Safety, said the single-sentence letter was designed to ensure the core message isn’t lost.

“We need widespread acknowledgment of the stakes before we can have useful policy discussions,” Hendrycks wrote in an email. “For risks of this magnitude, the takeaway isn’t that this technology is overhyped, but that this issue is currently underemphasized relative to the actual level of threat.”

In late March, a different public letter gathered more than 1,000 signatures from members of the academic, business and technology worlds who called for an outright pause on the development of new high-powered AI models until regulation could be put into place. Most of the field’s most influential leaders didn’t sign that one, but they have signed the new statement, including Altman and two of Google’s most senior AI executives: Demis Hassabis and James Manyika. Microsoft Chief Technology Officer Kevin Scott and Microsoft Chief Scientific Officer Eric Horvitz both signed it as well.

Notably absent from the letter are Google CEO Sundar Pichai and Microsoft CEO Satya Nadella, the field’s two most powerful corporate leaders.

Pichai said in April that the pace of technological change may be too fast for society to adapt, but he was optimistic because the conversation around AI risks was already happening. Nadella has said that AI will be hugely beneficial by helping humans work more efficiently and allowing people to do more technical tasks with less training.

Industry leaders are also stepping up their engagement with Washington power brokers. Earlier this month, Altman met with President Biden to discuss AI regulation. He later testified on Capitol Hill, warning lawmakers that AI could cause significant harm to the world. Altman drew attention to specific “risky” applications including using it to spread disinformation and potentially aid in more targeted drone strikes.

“These technologies are no longer fantasies of science fiction. From the displacement of millions of workers to the spread of misinformation, AI poses widespread threats and risks to our society,” Sen. Richard Blumenthal (D-Conn.) said Tuesday. He is pushing for AI regulation from Congress.

Hendrycks added that “ambitious global coordination” might be required to deal with the problem, possibly drawing lessons from both nuclear nonproliferation and pandemic prevention. Though a number of ideas for AI governance have been proposed, no sweeping solutions have been adopted.

Altman suggested in a recent blog post that there likely will be a need for an international organization that can inspect systems, test their compliance with safety standards, and place restrictions on their use, similar to how the International Atomic Energy Agency governs nuclear technology.

Addressing the apparent hypocrisy of sounding the alarm over AI while rapidly working to advance it, Altman told Congress that it was better to get the tech out to many people now while it is still early so that society can understand and evaluate its risks, rather than waiting until it is already too powerful to control.

Others have implied that the comparison to nuclear technology may be alarmist. Former White House tech adviser Tim Wu said likening the threat posed by AI to nuclear fallout misses the mark and clouds the debate around reining in the tools by shifting the focus away from the harms they may already be causing.

“There are clear harms from AI, misuse of AI already that we’re seeing, and I think we should do something about those, but I don’t think they’re … yet shown to be like nuclear technology,” he told The Washington Post in an interview last week.

Pranshu Verma and Cat Zakrzewski contributed to this report.