Here’s how Washington is racing to get a grasp on AI technology

FILE – Text from the ChatGPT page of the OpenAI website is shown in this photo, in New York, Feb. 2, 2023. (AP Photo/Richard Drew, File)

The rapid rise of ChatGPT and an influx of artificial intelligence (AI) competitors are leaving the federal government grappling with a range of concerns, from the spread of misinformation and a changing workforce to the risks of inherent bias in the technology.

Lawmakers and regulators are looking to take a unified approach to tackling those concerns.


The Federal Trade Commission (FTC), the Civil Rights Division of the Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB) and the Equal Employment Opportunity Commission (EEOC) issued a joint statement Tuesday pledging to enforce existing laws that aim to uphold fairness and justice as AI is increasingly used across a range of services, from housing to healthcare.

The issues range from enforcing existing laws aimed at addressing discrimination that could arise as AI is deployed more broadly to weighing new regulations that set the rules of the road.

“We have come together to make clear that the use of advanced technologies, including artificial intelligence, must be consistent with federal laws,” said Charlotte A. Burrows, chair of the EEOC. 

The agencies’ joint announcement focused largely on automated systems that use AI, rather than on generative AI-powered chatbots like ChatGPT.

Even so, ChatGPT’s skyrocketing popularity, the emergence of a rival tool from Google and other companies getting into the industry — including a new venture from Tesla and Twitter CEO Elon Musk — underscore the time crunch for policymakers.

Two key issues with AI

Alexandra Reeve Givens, president of the Center for Democracy and Technology — a nonprofit that focuses on tech policy issues, including internet privacy — said AI raises two key sets of issues.

On one hand, there are the disinformation risks posed by the recent rise of generative AI tools, like the popular chatbots or tools that can create “deepfake” videos.

On the other, the AI that powers automated systems poses risks of inherent bias that can lead to discrimination. 

“To me it’s incredibly important that policy makers think about both, and that in the recent conversations around generative AI they don’t forget all of the really important work that was happening and needs to continue around the automated-decision-making part of the conversation,” Givens told The Hill. 

Givens said confronting the risks posed by AI will take a combination of Congress weighing new regulations and agencies ramping up enforcement of preexisting laws.

“One of the issues is that even where existing laws apply, it might be hard to enforce those laws because of how AI systems work,” she said. 

For example, algorithmic hiring tools can lead to discrimination, but it would be hard for a worker to know if they are being discriminated against through the system, she said. 

“One of the things agencies need to grapple with isn’t just literally the application of the law but how to deal with the enforcement challenges and understanding how these AI systems have real world effect, and that’s something that we need every agency across many sectors to be looking at right now,” she said. 

Kristen Clarke, assistant attorney general for civil rights, also stressed that lawmakers need to be part of the solution as the agencies aim to ramp up enforcement.

“Artificial intelligence poses some of the greatest modern day threats when it comes to discrimination today, and these issues warrant closer study and examination by policymakers and others,” Clarke said during a Tuesday press conference. 

“But in the interim, we have an arsenal of bedrock civil rights laws that do give us the ability to hold bad actors accountable,” she added.

What is Congress doing?

The agencies’ joint announcement followed a proposal unveiled earlier this month by Senate Majority Leader Charles Schumer (D-N.Y.) that would create a framework for AI regulation aimed at increasing transparency and accountability.

The proposed framework is broad, but it could be a jumping-off point for Congress to take action, especially with a push from the majority leader.

In addition to Schumer’s proposal, lawmakers have taken action to press the industry on the risks related to the rise in AI technology. 

Sen. Mark Warner (D-Va.), chair of the Senate Intelligence Committee, sent letters Wednesday to the CEOs of tech firms OpenAI, Scale AI, Meta, Google, Apple, Stability AI, Midjourney, Anthropic, Percipient.ai and Microsoft, asking how they are addressing security risks as they develop large-scale AI models.

“While public concern about the safety and security of AI has been on the rise, I know that work on AI security is not new,” Warner wrote to the companies. 

“However, with the increasing use of AI across large swaths of our economy, and the possibility for large language models to be steadily integrated into a range of existing systems, from healthcare to finance sectors, I see an urgent need to underscore the importance of putting security at the forefront of your work,” he continued.

Sens. John Hickenlooper (D-Colo.) and Marsha Blackburn (R-Tenn.) last week also sent a letter to six tech associations, including the Software Alliance (BSA) and the Consumer Technology Association (CTA), asking how their members are weighing best practices on AI.

A spokesperson for CTA said the association has been “actively working” with its members to “contribute to the policies, standards, and frameworks around AI.”

“As we do that, we welcome inquiries from policymakers and opportunities for government-stakeholder dialogue. Collaboration will be key to achieving a national policy approach with the protections and flexibilities needed for American leadership on AI,” the spokesperson said in a statement. 

Craig Albright, vice president of U.S. government relations for BSA, said Congress can require companies to have risk management programs and to conduct risk assessments for high-risk uses of AI, as well as to define what counts as a high-risk use.

“Then companies will need to do impact assessments and design evaluations to make sure that they’re doing what they need to do to root out unintended bias,” he said. 


Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
