
We have the basis for an international AI treaty



Is an international AI treaty on the horizon and, if so, what should it look like? Organizations worldwide are publishing principles to guide artificial intelligence research and development; several have appeared in recent months. Surprisingly, these principles reflect a broad international consensus: 10 of the 12 most common principles are put forth by 90 percent of the organizations, which range from the Chinese government to the European Union and include Microsoft and Google DeepMind.

Last month, G-20 ministers in charge of trade and digital economy released a statement that included principles for Human-Centered Artificial Intelligence. In May 2019, the United States and 41 other countries endorsed the OECD principles, which could serve as an initial draft for a treaty.

The articulation of AI principles varies. Some appear as simple, one-sentence statements; others are long-form declarations. Some are organization-specific; others are country-specific. Some delineate design, use and governance separately; others cover all three areas without distinction.

The 12 most common principles are:

  • uphold human rights and values; 
  • promote collaboration; 
  • ensure fairness;
  • provide transparency; 
  • establish accountability; 
  • limit harmful uses of AI; 
  • ensure safety; 
  • acknowledge legal and policy implications; 
  • reflect diversity and inclusion; 
  • respect privacy; 
  • avoid concentrations of power; and 
  • contemplate implications for employment.

All but OpenAI’s principles discuss transparency, accountability, diversity and inclusion, and privacy. The two principles with the least consensus are “avoid concentrations of power,” mentioned by all except Microsoft and Google, and “contemplate implications for employment,” mentioned by six of the 10 organizations we reviewed.

The broad consensus does have some gaps; the current principles need greater specificity, and they must broach the issue of enforcement. Although the values are strongly shared, their articulation is consistently vague. The Microsoft principles include: “AI systems should empower everyone and engage people.” The Google principles state: “As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.” Although most people would find it hard to argue against the merits of these principles, it is difficult to implement something to “benefit all humankind,” for example, or to discern what constitutes “a broad range of social and economic factors.”

Among the principles, there are several unique points worth considering. The Future of Life principles call for avoiding an AI arms race in lethal autonomous weapons. The Beijing principles mention avoiding monopolies. Education is discussed in several sets of principles; recommendations range from creating a broader curriculum for engineers and technologists to educating stakeholders so they can adapt to the impact of AI technologies. IEEE recommends creating systems for registering autonomous technology that capture key parameters, such as intended use and training data. And OpenAI discusses the need for technical leadership to be effective at addressing societal impacts.

Continued discussion to define and refine principles is both timely and important given the emerging consensus. Here we advocate discussion of specific, actionable principles (some of which have been written about) to move our thinking forward.

In line with the 12 common principles, we propose that AI shall not retain or disclose confidential information without explicit prior approval from the source. AI must not increase any bias that already exists in our systems. AI, in all its forms (robotic, autonomous systems, embedded algorithms), must be accountable, interpretable and transparent so people can understand the decisions machines make. And fully autonomous offensive weapons should be banned.

In addition, all AI systems should have an impregnable off switch. AI should be subject to the full gamut of laws that apply to its human operator. AI shall clearly disclose it is not human. And AI applications (e.g., cars, toys) should be individually regulated, rather than trying to regulate the field of AI more broadly.

The surprising commonality in principles expressed across governments, professional associations, research labs and private corporations suggests that we have the basis for an international treaty on AI. The recent interest and rapid progress suggest that we need to move swiftly from vague principles to concrete action.   

Oren Etzioni is CEO of the Allen Institute for Artificial Intelligence in Seattle. A senior researcher in the field of AI and computer science, he is a professor at the University of Washington. Follow him on Twitter @etzioni.

Nicole DeCario is senior assistant to the CEO at the Allen Institute for Artificial Intelligence. Follow her on Twitter @NicoleDeCario.
