
Should the government regulate artificial intelligence? It already is



As nearly every day brings additional news about how artificial intelligence (AI) will affect the way we live, a heated debate has broken out over what the United States should do about it. On the one hand, the likes of Elon Musk and Stephen Hawking argue that we must regulate now, slowing AI's development and establishing general principles to govern it, because of its potential to cause massive economic dislocation and even to destroy human civilization.

On the other hand, AI advocates argue that there is no consensus on what AI is, let alone what it can ultimately do. Regulating AI in such circumstances, these advocates claim, will simply stifle innovation and cede to other countries the technological initiative that has done so much to power the U.S. economy.

The intense focus on these foundational questions threatens to obscure, however, a key point: AI is already subject to regulation in many ways, and, even while the broader debates about AI continue, additional regulations look sure to follow. These regulations aren't the sort of broad principles that Musk and Hawking urge and AI advocates fear: There's nothing on the books as dramatic as the first of Isaac Asimov's famed three laws of robotics, which directs that "a robot may not injure a human being or, through inaction, allow a human being to come to harm."

Thus far, most of the rules aren’t particular to AI at all. Rather, they are existing and sometimes longstanding privacy, cybersecurity, unfair and deceptive trade acts and practices, due process, and health and safety rules that cover technologies that now happen to be considered “AI.” These include rules about holding, using and protecting personal data, guidance on how to manage the risks caused by financial algorithms, and protections against discrimination.

To be sure, many of these rules are themselves the subjects of intense debate over, for example, whether they sufficiently protect consumers. The application of these existing legal frameworks and regulatory schemes to AI technologies can present difficult questions. For example, how do human-centric concepts like intent apply to robots? But even the more recent enactments that do specifically address AI, such as the many state laws governing autonomous vehicles, shy away from making general pronouncements about AI technology, instead choosing to target particular risks caused by specific applications.

This also seems to be the direction in which Congress is headed. As AI has taken off, Capitol Hill has largely held back, at least until the second half of 2017, when members introduced three separate pieces of AI-related legislation: the House-passed SELF DRIVE Act, which addresses the safety of automated vehicles; the AV START Act, a bipartisan Senate companion that similarly tackles driverless cars; and the Future of AI Act, a bipartisan Senate bill that would create an advisory committee on AI issues.

While all of these bills acknowledge the potential dislocations that animate Musk and Hawking’s concerns, they shy away from broad pronouncements about AI generally in favor of further study and a focus on addressing sector-specific questions as they arise. Indeed, the most general bill, the Future of AI Act, would merely establish a generalist body to study and provide advice on AI issues.

Conversely, the other two bills contain statutory provisions with immediate impact: they preempt certain state laws to ensure that the path is clear for innovation without the complications caused by disparate state regulatory regimes. These bills, the ones with actual bite, focus exclusively on automated vehicles, the one sector where states have begun to take an active role and where the technology is already poised to have a near-term, real-world impact.

It is tempting to look at these developments and conclude that industry and innovators can safely continue complying with the laws that affect them today while waiting to see what Congress does if and when it decides to focus on the particular type of AI technology they are developing. In other words, they could wait until their sector is under the legislative gun, as autonomous vehicle technology is today. But such an approach would be misguided.

Congressional consideration of the bills discussed above shows that legislators are seized of the many issues presented by the rapid development of AI technology. While Congress is taking incremental steps for the time being, the processes these bills set in motion could have long-term impacts. Even if these bills don't immediately or directly affect a company's sector, they could still have path-setting effects.

Decisions made today may have substantial ripple effects on the development of AI technology down the road, effects that legislators could easily miss. Who could have possibly imagined the full implications of Section 230 of the Communications Decency Act when it was enacted in 1996? Or the effect of the Electronic Communications Privacy Act's warrant requirement for emails less than 180 days old when it was passed in 1986? Early legislative enactments about new technologies tend to persist.

The very vocabulary that legislators are beginning to use in these bills could have a lasting impact on the way that regulators view and treat AI technologies more generally. If companies don't, for example, establish an AI lexicon that will help legislators or regulators understand, meaningfully describe, and distinguish between technologies that should be regulated differently, those legislators and regulators may very well develop such a lexicon themselves. Likewise, if companies don't make legislators or regulators aware of their industry best practices and model policies or codes of conduct, there's no chance those can serve as a guide as legislators look for models that work.

To the extent that existing legal regimes are affecting AI innovations and the frameworks within which they are being developed, there is no time better than the present for bringing to the attention of regulators the ways those existing frameworks are or are not working. The AI regulatory agenda may be set early. The AI community should take notice. Regulation is not just to come. It’s already here.

Christopher Fonzone is a partner in the privacy and cybersecurity group at Sidley Austin. He served as deputy assistant to President Obama, deputy White House counsel, and legal adviser to the National Security Council.

Kate Heinzelman is a member of the privacy and cybersecurity group at Sidley Austin. She served as special assistant to President Obama and associate White House counsel, and clerked for Chief Justice John Roberts.

The opinions expressed here are those of the authors and not of the firm.

