Congress scrambles to craft AI privacy rules as industry races ahead

A House panel will kick off a series of hearings about the role of artificial intelligence (AI) technology Wednesday, as the expanding industry leaves lawmakers scrambling to consider regulations.

As companies roll out more AI tools for commercial use, Wednesday’s House Energy and Commerce subcommittee hearing will focus on concerns around how AI systems collect and use data.

Lawmakers will hear from a range of experts and witnesses impacted by the rise of AI, including former Federal Trade Commission (FTC) Chairman Jon Leibowitz and Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) member Clark Gregg, an actor known for his role as Phil Coulson in the Marvel Cinematic Universe. 

Amba Kak, executive director of the AI Now Institute, an AI research nonprofit, said lawmakers should be cognizant of the regulatory tools and frameworks already in place to address risks. 

“There’s a tendency in this kind of hype moment to assume that we’re operating from a blank slate. In fact, we already have many of the regulatory tools that we need to govern and regulate AI effectively,” Kak said. 

“The notion that we need to somehow wipe away years of regulation and policy-thinking and create new frameworks from scratch really only serves the largest industry players more than it serves the rest of us, because it works to delay and provide these actors really significant influence on the scope and direction of policymaking,” Kak added.

Renewed push for a data privacy bill

In testimony prepared for Wednesday, Kak will recommend prioritizing data privacy — in particular, passing “strong, legally enforceable data minimization mandates,” such as the ones included in the American Data Privacy and Protection Act (ADPPA).

The ADPPA advanced out of the House Energy and Commerce Committee last year with bipartisan support. It had support from Sen. Roger Wicker (R-Miss.), then the ranking member of the Senate Commerce Committee, but was not brought for a markup due to a lack of support by Senate Commerce Chairwoman Maria Cantwell (D-Wash.). 

The bill would set a national standard for how tech companies collect and use consumer data. It would also give users the ability to sue over violations of the law through a private right of action. 

Victoria Espinel, the president and CEO of The Software Alliance, will also urge the committee to advance a comprehensive privacy bill that requires businesses to collect, use and share data in a way that respects consumers’ privacy; gives consumers rights to access and delete their data; and ensures companies that violate the rules are subject to strong enforcement. 

“The tremendous growth of AI has underscored the importance of these issues. As this Committee has recognized, a federal privacy law will create important new requirements for companies that collect and use consumers’ information, including in connection with AI,” Espinel will say, according to excerpts of her opening statement. 

Kak said that countries with data privacy laws have been able to “act very quickly” in response to privacy concerns after the public release of OpenAI’s ChatGPT and other large-scale AI systems. 

“In stark contrast, I think, while enforcement agencies are doing all they can using existing authorities and fairly limited resources, the U.S. has been limited in its ability to respond similarly to this moment in the absence of such a law,” Kak said. 

How the government is approaching AI rules

As the House Energy and Commerce Committee kicks off its series of hearings, which an announcement says will examine the role of AI across “every sector of the economy,” other work on AI is taking place across the government.

At the same time Wednesday, the House Science, Space and Technology Committee is holding a separate hearing to weigh risk management strategies for AI. 

In the Senate, Majority Leader Chuck Schumer (D-N.Y.) held an AI forum in September that brought in tech leaders to discuss the risks and benefits of AI with senators. 

A few proposals on AI regulation in the Senate have emerged addressing different areas of AI risks. For example, a bipartisan proposal introduced last week aims to protect the likenesses of actors, singers and other performers from generative AI technology. 

Another bipartisan proposal unveiled in September by Sens. Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.) would create a framework for AI regulation, including requiring AI companies to apply for licenses. 

A coalition of civil society and advocacy groups, including the Center for Democracy and Technology, the NAACP and the American Civil Liberties Union, sent a letter to lawmakers Tuesday urging them to consider the current civil rights risks posed by AI, especially for marginalized groups. 

“For the United States to be a true global leader in AI, it must lead in responsible, rights-respecting innovation that directly addresses these myriad harms. We hope and expect that future AI Insight Forums, Congressional hearings, and legislation will center these issues and draw on the expertise of civil society and the communities most impacted by these technologies,” they wrote. 

The Biden administration is also taking steps on AI. The administration secured voluntary commitments aimed at managing risks posed by AI from top companies in the field, including OpenAI, Microsoft and Google. 

President Biden also said last month that he would be taking executive action on AI in order to ensure American leadership in the space.

A coalition of Democrats in the House and Senate sent a letter to Biden last week asking him to turn nonbinding safeguards on AI already released by the White House into policy through an executive order.

Kak said there is a chance now for lawmakers and regulators to take action before AI systems are “already entrenched in a particular trajectory.” 

“Now is the moment to emphasize that there’s nothing about the current path that technology is on that is inevitable, and that we need strong data privacy laws and strong competition frameworks to … make sure that the trajectory of AI technologies is shaped in the public interest and not solely by a handful of corporate actors that are ultimately only going to be driven by commercial incentives,” Kak added.


Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
