Technology

Scientists, experts say mitigating ‘extinction’ risk of AI should be global priority

A group of artificial intelligence (AI) experts and industry leaders is warning that AI could pose an existential threat to humanity and that mitigating its risks should be a “global priority.”

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the one-sentence statement read.

The statement was released Tuesday by the Center for AI Safety, a nonprofit organization, and was signed by more than 350 AI leaders, experts, and engineers, including chief executives of leading AI companies: Sam Altman of OpenAI, Demis Hassabis of Google DeepMind, and Dario Amodei of Anthropic.

The latest warning comes as concern about the potential harms posed by the rapidly developing technology spreads through the industry and beyond.

Earlier this month, Altman testified before a Senate subcommittee about the potential harms of AI and implored lawmakers to impose regulations on the industry.

In an interview with The New York Times, Center for AI Safety Executive Director Dan Hendrycks described the statement as a “coming out” for some in the industry who have stayed publicly silent as others have issued warnings about the harm AI could pose.

“There’s a very common misconception, even in the A.I. community, that there only are a handful of doomers,” Hendrycks told The Times. “But, in fact, many people privately would express concerns about these things.”

Earlier this year, more than 1,000 industry figures signed a letter calling for a six-month pause on AI development, arguing the technology poses “profound risks to society and humanity.”