
Europe’s AI Act is going forward: Expect trade disputes to follow

The European Union just took a big step toward regulating artificial intelligence.  

In June, the EU Parliament voted to go forward with its so-called “AI Act,” the final details of which will now be hammered out with the European Commission. There are still some big gaps to fill in. For example, there’s growing support for the EU to narrow its definition of AI. But another issue needs more attention: whether the EU will accept foreign testing for compliance with regulations on “high-risk” AI. If it doesn’t, a wave of trade litigation over standards will follow, hurting the chances for global cooperation on AI and eroding trust in these technologies.

The EU’s regulation divides AI into three risk categories: “unacceptable,” “high” and “low or minimal.” At one end of the continuum, AI is defined as posing an unacceptable risk if it jeopardizes health and safety or violates fundamental human rights. An algorithm that manipulates a person’s voting behavior is an example. These technologies are prohibited from the EU’s market.

At the other end of the continuum, AI is defined as posing a low or minimal risk if it’s unrelated to health and safety or fundamental human rights. An example would be an AI-based video game. These technologies are expected only to live up to voluntary codes of conduct, like the standards recently agreed to by companies in the U.S.

Then there’s high-risk AI. This is where the action is. AI is defined as posing a high risk if it bears on health and safety or fundamental rights but falls shy of being unacceptable. Two annexes to the AI Act detail these technologies; examples include autonomous vehicles, medical devices and biometric identification systems. Here’s the key: These technologies have to comply with mandatory requirements and prove compliance through a process of conformity assessment.

For the most interesting types of high-risk AI, conformity assessment has to be conducted by a “third party,” known as a notified body. So again, will the EU accept the results of conformity assessment by competent third parties abroad? 

Despite its extraterritorial reach, the AI Act says almost nothing about this. Article 39 notes that the EU could have “an agreement” with a foreign country so that its testing facilities “may be authorized” to do conformity assessment. The EU Parliament’s Draft Compromise Amendments take this to mean that Brussels plans to sign Mutual Recognition Agreements with key trade partners. This would let the EU accept foreign testing on a bilateral basis, but it’s not obvious how these agreements would be structured.

The World Trade Organization’s Technical Barriers to Trade Agreement states that testing by competent conformity assessment bodies abroad should be accepted, even if different procedures are used. Mutual Recognition Agreements, along with unilateral and multilateral recognition, can get this done. Yet Mutual Recognition Agreements differ in depth and breadth, and it’s not clear what the EU might offer, on what schedule, or with what phase-in period. For products like medical devices, which are already subject to extensive certification, adding AI might not be a big regulatory stretch. But for other technologies, notably those that cross, if not redefine, sectors, Mutual Recognition Agreements will be difficult.

Not surprisingly, the AI Act drew the attention of Europe’s trade partners as soon as it was notified to the WTO in 2021. The U.S.-EU Trade and Technology Council issued a joint roadmap on AI, calling for non-discrimination in conformity assessment. The U.S. followed this up by raising the issue in both the 2022 and 2023 National Trade Estimate Reports.

Then there’s China. Beijing has raised five specific trade concerns at the WTO about every conceivable part of the AI Act, including conformity assessment. Look for other countries to pile on as soon as the EU releases a final text. 

How big a problem is this? The European Commission’s Impact Assessment estimates that 5-15 percent of AI will be defined as high-risk. Yet this estimate predates the European Parliament’s decisions to include environmental considerations in making this call and, possibly, to add conversational and art-generating AI tools like ChatGPT.

And that’s the point: There can be no doubt that the definition of high-risk AI will grow for political, not just technological, reasons. As it does, compliance and verification costs, estimated at 6-10 percent of investment for domestic companies, will balloon, and by much more for foreign firms if the EU demands redundant testing.

The EU has a herculean task in finalizing the AI Act. Questions about intellectual property, services and digital trade loom especially large. Up against these, the last thing the EU needs right now is a trade dispute over conformity assessment. Protectionism won’t foster greater trust in AI. 

Marc L. Busch is the Karl F. Landegger Professor of International Business Diplomacy at the Walsh School of Foreign Service, Georgetown University, and a global fellow at the Wilson Center’s Wahba Institute for Strategic Competition. Follow him on Twitter @marclbusch. 
