The views expressed by contributors are their own and not the view of The Hill

AI regulation is not a silver bullet

Last month, Sam Altman, CEO of OpenAI, the company that gave the world ChatGPT and all the headaches thereafter, pleaded before a Senate Judiciary subcommittee for Congress to regulate “increasingly powerful” artificial intelligence (AI) systems. A week later, the Biden-Harris administration announced new efforts “to advance the research, development and deployment of responsible AI that protects individuals’ rights and safety and delivers results for the American people.”

This new missive builds on the administration’s AI Bill of Rights from October 2022 and Senate Majority Leader Chuck Schumer’s (D-N.Y.) framework for AI legislation in April 2023. However, none of these efforts constitutes actual legislation; together they sound remarkably like a university faculty or corporate meeting in which everyone agrees to make a plan. Other than state and local legislation, such as New York City’s law to prevent bias in AI-driven employment decisions, most AI regulatory efforts thus far have been advisory in nature. Is this patchwork the best way to regulate something that is poised to alter our information economy, impact knowledge worker employment and exacerbate the ills of technology on society?

As U.S. lawmakers continue their merry-go-round of confabulations, the European Parliament is on the verge of enacting the European Union Artificial Intelligence Act. While the EU AI Act contains innovative ideas that can inform global regulatory efforts, such as tiered rules for AI systems based on the level of threat each poses, it risks creating a regulatory chokehold on the technology industry. Regulations have unintended consequences; take the EU’s data protection law, the General Data Protection Regulation (GDPR). Only deep-pocketed companies have the resources to follow its complex rules, tilting the scales in favor of Big Tech while small businesses struggle to comply. The GDPR’s record is mixed at best, and that should give us pause.

The Chinese approach to the technology sector has been to champion local companies and cultivate a blooming walled garden. In AI, the approach of the People’s Republic of China is similar; the country’s political leaders and its battery of state-owned enterprises and academic institutions are focused on AI with Chinese characteristics, hoping to win the global AI race. In May 2019, the Beijing AI Principles were published by the Beijing Academy of Artificial Intelligence, an organization backed by the Chinese Ministry of Science and Technology and the Beijing municipal government, along with Tsinghua University, Peking University and the country’s biggest tech behemoths, Alibaba, Baidu and Tencent. China has also been quick off the blocks to regulate generative AI, seeking to impose measures like prior government approval before the release of any ChatGPT-like products. As China accelerates its international trade and investment policy, the Belt and Road Initiative, its rules on AI may also get exported to countries around the world.

Countries as diverse as Brazil, Canada, Germany, Israel, India and the United Kingdom are developing national strategies for the ethical use of AI as well. Among the Gulf countries, the United Arab Emirates, Saudi Arabia and Qatar have also outlined national AI strategies and roadmaps. Our analysis of these efforts makes one thing very clear: No country has it all figured out yet. AI requires updates to our regulatory approach and upgrades to our risk architectures.

AI is both a horizontal technology with broad applications and a dual-use technology that can be put to good and bad ends. Outside the EU and China, the current approach has mostly been to establish guidelines, but these are not binding, nor do they carry penalties for transgressions. Without enforcement, there is little point in having rules that do nothing but codify norms.

But formal rules are not the only way to enforce important societal norms. There are ways to shape AI research, development and deployment short of innovation-stifling regulation.

Self-regulating organizations (SROs), wherein industry participants voluntarily establish standards and best practices and agree to abide by them, are one mechanism by which AI researchers and merchants can be held accountable, albeit to each other. If such SROs have sanctioning power, as the Financial Industry Regulatory Authority does for the financial system in the United States, all the better. Organizations publicly declaring their support for and compliance with ethical AI principles and standards would be a great start. Independent audits and third-party certifications of compliance with those standards could then define the next level of scrutiny.

A myriad of product safety, consumer protection and anti-discrimination laws already exists, and they apply to products and services that embed AI. When such systems make mistakes, the consequences depend on the context and use case. Autocorrect misfiring carries low stakes; facing criminal charges because of an AI error carries massive consequences and must be avoided. The bar for AI must be high when the cost of errors is high. That is exactly the level-of-risk approach to regulation currently being considered in the EU. And as is being contemplated in the U.K., sector-specific regulation can bring contextual granularity.

Ultimately, regulation has to balance multiple objectives: citizen rights, consumer welfare, technology innovation, economic interests, national security and geopolitical interests, among others. It needs to consider both the present and the future. Its scope is at once local and global. As such, AI rules must align with existing regulatory ethos, institutions and capabilities. AI regulation has to be strategic, not trend-chasing or based on the latest shiny AI tool.

Post-ChatGPT, the chorus for AI regulation is reaching a crescendo. The one race that no one should win, however, is the race to the regulatory bottom.

James Cooper is a professor of law at California Western School of Law in San Diego.

Kashyap Kompella, CFA, is CEO of RPA2AI Research and a visiting professor for artificial intelligence at BITS School of Management (BITSoM).
