
As machine learning becomes standard in military and politics, it needs moral safeguards


Over the past decade, the world has experienced a technological revolution powered by machine learning (ML). Algorithms remove the decision fatigue of purchasing books and choosing music, and the work of turning on lights and driving, allowing humans to focus on activities more likely to optimize their sense of happiness. Futurists are now looking to bring ML platforms to more complex aspects of human society, specifically warfighting and policing.

Technology moralists and skeptics aside, this move is inevitable, given the need for rapid security decisions in a world with information overload. But as ML-powered weapons platforms replace human soldiers, the risk of governments misusing ML increases. Citizens of liberal democracies can and should demand that governments pushing for the creation of intelligent machines for warfighting include provisions maintaining the moral frameworks that guide their militaries. 

In his famous 1989 essay, “The End of History?,” Francis Fukuyama summarized debates about the ideal political system for achieving human freedom and dignity. From his vantage point in mid-1989, months before the unexpected fall of the Berlin Wall, no systems besides democracy and capitalism could generate wealth, pull people out of poverty and defend human rights; both communism and fascism had failed, creating cruel autocracies that oppressed people. Without realizing it, Fukuyama prophesied democracy’s proliferation across the world. Democratization soon occurred through grassroots efforts in Asia, Eastern Europe and Latin America.

These transitions, however, would not have been possible had the militaries not acquiesced to reform. In Spain and Russia, the military attempted coups before recognizing the dominant political desire for change. China instead opted to annihilate reformers.

The idea that the military has veto power might seem incongruous to citizens of consolidated democracies. But in transitioning societies, the military often has the final say on reform due to its symbiotic relationship with the government. In contrast, consolidated democracies benefit from the logic of Clausewitz’s trinity, where there is a clear division of labor between the people, the government and the military. In this model, the people elect governments to make decisions for the overall good of society while furnishing the recruits for the military tasked with executing government policy and safeguarding public liberty. The trinity, though, is premised on a human military with a moral character that flows from its origins among the people. The military can refuse orders that harm the public or represent bad policy that might lead to the creation of a dictatorship.

ML risks destabilizing the trinity by removing the human element of the armed forces and subsuming them directly into the government. Developments in ML have produced weapons platforms that rely less and less on humans, as new warfighting machines are capable of providing security or assassinating targets with only perfunctory human supervision. Machines acting without human involvement risk creating a dystopian future in which political reform becomes improbable, because governments will no longer have human militaries restraining them from opening fire on reformers. These dangers are already evident in China, where the government shows no compunction about deploying ML platforms to monitor and control its population while also committing genocide.

In the public domain, there is some recognition of the dangers of misusing ML for national security. But there hasn’t been a substantive debate about how ML might shape democratic governance and reform. There isn’t a nefarious reason for this. Rather, it’s that many of those who develop ML tools have STEM backgrounds and lack grounding in broader social issues. On the government side, leaders in the agencies funding ML research often don’t know how to consume ML outputs, relying instead on developers to explain what they’re seeing. The government’s measure for success is whether it keeps society safe. Throughout this process, civilians operate as bystanders, unable to interrogate the design process for ML tools used for war.

In the short term, this is fine because there aren’t entire armies made of robots, but the competitive advantage offered by mechanized fighting not limited by frail human bodies will make intelligent machines essential to the future of war. Moreover, these terminators will need an entire infrastructure of satellites, sensors, and information platforms powered by ML to coordinate responses to battlefield advances and setbacks, further reducing the role of humans. This will only amplify the power governments have to oppress their societies.

The risk that democratic societies might create tools that lead to this pessimistic outcome is high. The United States is engaged in an ML arms race with China and Russia, both of which are developing and exporting their own ML tools to help dictatorships remain in power and freeze history.

 There is space for civil society to insert itself into ML, however. ML succeeds and fails based on the training data used for algorithms, and civil society can collaborate with governments to choose training data that optimizes the warfighting enterprise while balancing the need to sustain dissent and reform. 
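To make that point concrete, here is a minimal, purely illustrative sketch in Python with scikit-learn. It does not depict any real system: the feature names, labels and thresholds are invented. It simply shows how the same simple classifier, trained on two different curations of synthetic data, can score the same unarmed gathering either as benign or as a threat, which is exactly the kind of training-data choice civil society could help scrutinize.

```python
# Toy sketch (hypothetical): the same classifier, trained on two different
# curations of synthetic data, scores the same event very differently.
# Feature names, labels and thresholds are illustrative, not real systems.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_training_data(label_large_protests_as_threats: bool, n: int = 500):
    # Synthetic features: [crowd_size, armed, organized_online]
    crowd = rng.integers(0, 1000, n)
    armed = rng.integers(0, 2, n)
    online = rng.integers(0, 2, n)
    X = np.column_stack([crowd, armed, online])
    # In this toy world, only armed activity is genuinely a threat...
    y = armed.copy()
    if label_large_protests_as_threats:
        # ...but one curation also labels large unarmed gatherings as threats.
        y = np.where(crowd > 300, 1, y)
    return X, y

# The same event in both cases: a large, unarmed gathering organized online.
event = np.array([[800, 0, 1]])

for curation in (False, True):
    X, y = make_training_data(curation)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    p_threat = model.predict_proba(event)[0, 1]
    print(f"protests labeled as threats in training data = {curation}: "
          f"P(threat) = {p_threat:.2f}")
```

Which behavior a deployed model exhibits is decided before it ever sees the event, at the moment the training data is curated.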

By giving machines moral safeguards, the United States can create tools that instead strengthen democracy’s prospects. Fukuyama’s thesis is only valid in a world where humans can exert their agency and reform their governments through discussion, debate and elections. The U.S., in the course of confronting its authoritarian rivals, shouldn’t create tools that hasten democracy’s end.  

Christopher Wall is a social scientist for Giant Oak, a counterterrorism instructor for Naval Special Warfare, a lecturer on statistics for national security at Georgetown University and the co-author of the recent book, “The Future of Terrorism: ISIS, al-Qaeda, and the Alt-Right.” Views of the author do not necessarily reflect the views of Giant Oak. 
