The views expressed by contributors are their own and not the view of The Hill

Either the law will govern AI, or AI will govern the law

The OpenAI logo is seen on a mobile phone in front of a computer screen which displays output from ChatGPT, Tuesday, March 21, 2023, in Boston. (AP Photo/Michael Dwyer, File)

The release of ChatGPT nearly one year ago in November 2022 ushered in wide recognition that the Age of AI had arrived. For years, there has been a contentious global debate on the future of artificial intelligence (AI) governance. The urgency for action is obvious, as we stand at a crossroads: either the law will govern AI, or AI will govern the law.

This week, the Senate Committee on Homeland Security and Governmental Affairs tackled this issue head-on in a hearing on “The Philosophy of AI: Learning From History, Shaping Our Future.” Led by Committee Chairman Gary Peters (D-Mich.), the hearing focused on the philosophical and historical dimensions of AI governance, specifically exploring the promise and limitations of both the law and AI.

Part of what makes AI so challenging to regulate is that these systems reach far beyond their technical components and specific products. Seen as a knowledge structure, AI is better understood as a philosophical force. Generative AI, machine learning, algorithms, and other subsets of AI do not operate absent the context in which they are developed and deployed. They learn by digesting collective narratives, and they can reflect existing hierarchies rooted in preexisting historical, philosophical, political and socioeconomic structures. Acknowledging this allows us to see how AI may perpetuate the inequities and antidemocratic values that constitutional democracies have sought to correct for generations.

In the past year, billions of dollars have flooded into AI investment, further fueling the need for a dialogue on rights-based AI governance. For that dialogue, we can look to the Constitution as an example.

The U.S. Constitution is inspired by a philosophy of how to guarantee rights and constrain power. It separates and decentralizes power, and installs checks and balances, to prevent abuses of power. AI must be viewed in much the same way. Both the Constitution and AI are deeply philosophical, and placing them side by side reveals how they might be in tension on a philosophical level. If we treat AI as only a technology, we will miss how it can transform into a governing philosophy that rivals the governing philosophy of a constitutional democracy.

In a constitutional democracy, the rule of law precedes power. Our Founders championed a vision of equality, inalienable rights, and self-governance. The signing of the Declaration of Independence, the Constitutional Convention, and the ratification of the Bill of Rights were products of a deep historical and philosophical struggle. The decision before us now is this: will AI be applied in a way consistent with our constitutional philosophy, or will it alter, erode, or mediate it?

AI is already being deployed for governance purposes. Increasingly, fundamental rights such as expressive rights will flow through technology companies acting as conduits, where they can be altered, reshaped and weaponized to undermine democratic values. We are at a critical juncture where we must grapple with whether constitutional rights were ever meant to be mediated through commercial enterprises and AI technologies in this way.

AI is not only a knowledge structure; it is also a power and market structure. As the capacities of AI evolve, several risks will grow exponentially, more rapidly than we can anticipate: pressure to reexamine definitions of personhood and citizenship; the erosion of privacy; the difficulty of separating reality from fiction; carefully choreographed disinformation and misinformation campaigns; AI displacement of human knowledge, judgment and labor; and others we likely cannot begin to predict. The national security risks are particularly concerning, as AI is highly susceptible to cognitive-manipulation abuses by foreign adversaries and domestic violent extremists.

The humanities and philosophies that have undergirded our “analogue democracy” must serve as our guide in a “digital democracy.” If we look at AI too literally, as only a technology, we risk failing to grasp its full impact on our society and missing how AI as a governing philosophy might rival or compete with the governing philosophy of a democracy. When the philosophy of constitutional democracy can speak to the philosophy of AI, it is easier to comprehend where the two are inconsistent.

From history, we know that the law can be bent and contorted, especially when structures of power evolve into an ideology. The foundational principles of a constitutional democracy provide a touchstone for analysis at this critical moment when AI oversight decisions must be made. In a constitutional democracy, there is only one answer to the question of whether the law will govern AI or AI will govern the law. The law must govern AI.

Margaret Hu is Taylor Reveley Research Professor and Professor of Law and the director of the Digital Democracy Lab at William & Mary Law School. Professor Hu testified alongside Professor Shannon Vallor of the University of Edinburgh and Professor Daron Acemoglu of MIT on Nov. 8 in a hearing of the Senate Committee on Homeland Security and Governmental Affairs on the philosophy of AI and governance.
