Progress in AI, according to one of the field's leading researchers, will do for society what electricity did roughly a century ago, bringing "an equally large transformation." That's not merely hype: America's technology giants are spending billions to "remake themselves around AI." It's no surprise, then, that military and intelligence organizations around the world are looking to do the same. In a study I co-authored on AI and national security for the U.S. Intelligence Community, we found that AI is poised to deliver revolutionary capabilities across the landscape of warfare and espionage.
Global leaders have taken notice. This past Monday, the United Nations' Convention on Certain Conventional Weapons met in Geneva to discuss potential restrictions on robotic weapons systems that use AI. Not everyone is eager to participate. In September, Russian President Vladimir Putin said that "Artificial intelligence is the future. Whoever becomes the leader in this sphere will become the ruler of the world." True to form, Russia has dramatically increased spending on robotic weapons platforms and plans for 30 percent of its military to be robotic by 2030.
Russia is far from alone. In July, China's government released its national strategy for AI, which calls for spending billions annually in pursuit of an AI "military-civil fusion," whereby leading Chinese technology firms cooperate with the military to develop AI weapons. China's strategy embraces dual-use AI technologies, of which there are many. AI-enabled facial recognition is a harmless delight when it lets teenagers on Snapchat picture flower crowns on their heads, but China uses the same capability for domestic surveillance. Thanks to facial recognition systems, citizens who jaywalk across a street in China are liable to have their face and name appear on a nearby screen, alongside a police warning.
Even terrorist groups are using commercial AI and robotics technologies to gain capabilities once restricted to nation-states. For more than a year, the Islamic State has been adding explosives to consumer drones to create a cheap and crude version of a cruise missile. Such drones get cheaper and more capable every year.
The United States would also like to accelerate the integration of AI into its military plans and capabilities. Defense Secretary Jim Mattis, however, has expressed frustration at the Pentagon's slow pace in bringing the best of Silicon Valley to the Defense Department. Part of the challenge is that, in the wake of the Edward Snowden revelations and Trump's election, many in the technology community are reluctant to work with the military, worrying that its interest in AI is part of the problem.
This is a shame. National security officials need the advice and counsel of the AI research community on how to use the technology ethically and effectively. At the same time, AI researchers need to hear from the national security community about the full implications of the technologies they pioneer. Congress has a critical role to play in fostering this conversation. We are past due for congressional hearings on the national security implications of AI.
As with any technology revolution, increased adoption of AI brings opportunities and risks. Unfortunately, the loudest voices in the conversation are focused on the Terminator movie scenario, in which a super-intelligent AI system achieves godlike power and pursues humanity's extinction. It's true that AI researchers are growing increasingly confident AI may surpass human intelligence sometime in the 21st century, but AI systems could pose a threat to global security long before that. In 1979, a computer glitch almost caused World War III when America's missile warning system falsely reported data from a training exercise as a Soviet nuclear first strike. The computers and software powering that system were far "dumber" than today's smartphones. But "smart" AI systems can make big mistakes too, as when the AI software underlying Google's Photos app mistakenly classified images of dark-skinned people as "gorillas."
It is not only smartness or dumbness that entails risk, but also the uncertainty and lack of operational safety experience that accompany any new technology. Electric light bulbs today are far less likely to set a building on fire than candles ever were, but in the early days of electricity, fires were rampant. The pressure to sideline safety concerns is far greater in the national security context, where falling behind in a key technology could mean not merely the loss of market share, but the loss of a war.
We are in the early days of the military AI revolution. The correct path forward is far from clear. But the need for a frank conversation is abundantly clear. Congress should play its part.
Gregory C. Allen is an adjunct fellow at the Center for a New American Security. He is the co-author of “Artificial Intelligence and National Security,” published by the Harvard Belfer Center for Science and International Affairs.