A balanced AI governance vision for America

Artificial intelligence (AI) holds the promise of fueling explosive economic growth and improving public health and welfare in profound ways—but only if we let it. 

Worryingly, the current dialogue around AI lacks nuance and could lead to stagnation rather than innovation. On one side, AI decelerationists seek to slow algorithmic progress through unworkable “pauses” and heavy-handed regulatory proposals. On the other, accelerationists want to let ‘er rip and hope everything will be fine without meaningful development guidelines. But to make progress on AI innovation and its governance, America needs a better approach than these all-or-nothing extremes.

We need a more balanced strategy, especially if the United States hopes to maintain and extend the advantages it enjoys globally in digital technology sectors as China and other nations look to catch up in the unfolding computational revolution. The United Kingdom, for example, just launched a bold new policy framework “to turbocharge growth” using a “pro-innovation approach to AI regulation,” noting that “a heavy-handed and rigid approach can stifle innovation and slow AI adoption.”

Unfortunately, the Biden administration’s proposed “AI Bill of Rights” mostly stresses possible dangers over potential opportunities, arguing that AI systems “threaten the rights of the American public.” Unsurprisingly, this effort focuses on the alleged need for new government mandates, with far less attention paid to innovation. Meanwhile, the Department of Commerce just launched a new proceeding on “AI accountability,” and in Congress, Senate Majority Leader Chuck Schumer (D-N.Y.) is apparently pushing for a new law to legislate “responsible AI.”

These efforts center on demanding algorithmic “explainability” and other amorphous requirements that would entail government meddling in fast-moving computational processes. They could grow into a cumbersome, slow regulatory approval process that undermines AI advances. U.S. policymakers should reject these types of top-down mandates and take a more open-minded approach to AI, using flexible, iterative, bottom-up governance solutions to address algorithmic concerns. Many different actors and mechanisms can play a role in ensuring safer AI systems without derailing innovation.

Various government policies and bodies already exist to address algorithmic concerns, as outlined in a new R Street Institute study. The U.S. has 15 Cabinet agencies, 50 independent federal commissions and more than 430 federal departments in all, many of which already consider how AI touches their field. Consumer protection agencies, like the Federal Trade Commission and comparable state offices, are taking steps to oversee potentially unfair and deceptive algorithmic practices. Regulatory agencies such as the National Highway Traffic Safety Administration, the Food and Drug Administration and the Consumer Product Safety Commission have broad oversight and recall authority, allowing them to remove defective or unsafe products from the market.

Algorithmic systems will be governed by these policies as well as court-based tools like contract and property law, torts, and products liability. Common law will adapt to address new technological realities for AI and robotics, just as it already did with consumer electronics, computing, the internet and many other technologies. This uniquely American approach to flexible governance came about in the 1990s through a bipartisan freedom-to-innovate vision sketched out by the Clinton administration and a Republican Congress. It has kept our nation on the cutting edge of high-tech innovation ever since.

Meanwhile, professional associations and standards-setting bodies have created robust best-practice frameworks for AI and robotics. Organizations such as the Association for Computing Machinery, the Institute of Electrical and Electronics Engineers, the International Organization for Standardization and UL have all developed ethical guidelines to ensure “ethics-by-design” (incorporating privacy, safety and antidiscrimination principles). Major tech trade associations and companies have also formulated governance codes of conduct for AI development and use.

The government’s best role lies in convening different stakeholders to work toward consensus best practices on an ongoing basis. The National Institute of Standards and Technology has developed an AI Risk Management Framework, a consensus-driven strategy to help build more trustworthy algorithmic systems and “adapt to the AI landscape as AI technologies continue to develop.” This pragmatic approach is “designed to be responsive to new risks as they emerge” instead of attempting to solve them all in advance, which would be impossible and would undermine important AI innovations. Additional policy steps can always be adopted on an as-needed basis.

Finally, the most effective solution to technological problems usually lies in more innovation, not less. Developers also have powerful reputational incentives to improve the safety and security of their systems to avoid not only punishing liability, but also unwanted press attention and lost customers.

Real risks remain, of course, and a culture of AI ethics-by-design is critical. AI accelerationists sometimes too casually dismiss legitimate concerns about how powerful computational systems will create challenges that necessitate ongoing solutions. However, regulatory-minded decelerationists often forget that there is an equally compelling public interest in ensuring that algorithmic innovations are developed and made available to society. 

America does not need more bureaucracy or a thicket of new rules for AI. We are on the cusp of untold advances in nearly every field thanks to AI. Our success depends on flexible governance and practical solutions that preserve the pro-innovation model central to U.S. leadership in the technology sector.

Adam Thierer is a senior fellow for the technology and innovation team at the R Street Institute.
