
AI could endanger humanity in 5 years: Former Google CEO

FILE - Eric E. Schmidt, co-founder of Schmidt Futures, listens on Capitol Hill in Washington, Tuesday, Feb. 23, 2021, during a hearing on emerging technologies and their impact on national security. While technology experts sound the alarm on the pace of artificial-intelligence development, philanthropists, from long-established foundations to tech billionaires such as Schmidt and his wife, Wendy, have been responding with an uptick in grants. (AP Photo/Susan Walsh, File)

Former Google CEO Eric Schmidt said he thinks artificial intelligence (AI) capabilities could endanger humanity within five to 10 years and companies aren’t doing enough to prevent harm, Axios reported Tuesday.

In an interview at Axios’s AI+ Summit, Schmidt compared the development of AI to nuclear weapons at the end of World War II. He said after Nagasaki and Hiroshima, it took 18 years to get a treaty over test bans but “we don’t have that kind of time today.”

Schmidt said the dangers of AI begin when "the computer can start to make its own decision to do things," such as discovering weapons.

The technology is advancing rapidly. Two years ago, experts warned that AI could endanger humanity in about 20 years. Now, Schmidt said, experts think that point could be anywhere from two to four years away.

Schmidt suggested the creation of a global entity, similar to the Intergovernmental Panel on Climate Change, to “feed accurate information to policymakers” so the urgency of the issue is understood.


Despite his warnings, the former Google CEO thinks AI can still be a tool for humanity.

“I defy you to argue that an AI doctor or an AI tutor is a negative,” he told Axios. “It’s got to be good for the world.”

Schmidt’s warnings come less than a month after President Biden signed a sweeping executive order on AI. The order includes several new actions on safety, privacy, worker protections and innovation.

The order sets new standards for safety, including requirements for companies developing models that pose risks to national security, economic security or public health. It also requires the Commerce Department to develop guidance for AI-generated content and directs federal agencies to preserve the privacy of data gathered by AI systems.

It also aims to protect workers by minimizing the harms and maximizing the benefits AI can produce for employees, among other provisions.

Google, where Schmidt served as CEO until 2011, has released its own AI tool, Bard, a rival to OpenAI’s popular ChatGPT.

The order and Schmidt’s warnings come as AI capabilities are rapidly evolving and platforms have made tools widely accessible and popular among users.