The views expressed by contributors are their own and not the view of The Hill

Don’t fear the AI. Expose it. 


On Nov. 30, 2022, ChatGPT was released to very little fanfare. Today, it is a hot topic in the media and politics, the subject of hand-wringing across any number of sectors fearing inaccurate, harmful, or otherwise unethical results. At the G-7 Summit on May 21-23, global leaders paid special attention to AI, committing to work with technology leaders to ensure safe and trustworthy AI, and in June, prompted by public ChatGPT concerns, Senate Majority Leader Chuck Schumer (D-N.Y.) called for the U.S. to regulate AI. On July 13, the Federal Trade Commission announced its investigation of the headline-grabbing technology.

ChatGPT seems to have captured public attention through its timing and its features. It promises to automate much of the work many of us procrastinate on, from writing reports and documents to conducting research. It has made us pause and question what the future of work, school, and even our personal lives will look like if we rely on AI to do so much of it.

How will we assess when an employee is performing their job duties effectively? Will lawyers trust the output of ChatGPT for its accuracy and confidentiality when using it to draft briefs or conduct relevant research for a case? Which professors will believe that a well-written paper from an underperforming student is actually a student’s work? Will content creators have any rights regarding their work? Can AI influence your political beliefs? Did your lover really write that sonnet?  

While technology leaders have called on developers to halt the progression of AI, researchers in AI have simultaneously reinforced and questioned AI fears. How harmful is AI, really?

The problem is that not all AI is alike, and many types of AI are hidden from public scrutiny. The focus on ChatGPT, a generative AI, has caused us to forget the many other kinds of AI, which can cause very serious harm. High-risk AI often operates as a “black box,” meaning how the AI makes its decisions cannot be observed. AI black boxes fly airplanes, guide surgical robots, power hearing aids, are integrated into cars, guide military drones and weaponry, diagnose cancer, streamline complex manufacturing, power diabetes therapy and insulin pumps, navigate spacecraft, optimize agriculture production, and will power long-distance trucking, to name just a few. AI is part of literally every sector.

Almost no government requires public disclosure of detailed AI information. Even the European Union’s proposed AI Act does not mandate detailed functionality disclosure to the public, though it does require organizations to disclose an AI system’s existence, stop using certain high-risk AI, and conduct risk assessments. China’s Cyberspace Administration (CAC), the agency responsible for enforcing the PRC Cybersecurity Law, has also issued regulations taking a fundamentally different approach. The CAC requires submission of such systems for security review, a mandatory licensing scheme, and obligations to “adhere to core socialist values,” anticipating the power of generative AI to influence users. These steps are not unexpected; they are consistent with existing obligations under the PRC Cybersecurity Law.

In most countries, including the U.S., we can’t examine how AI functions because we don’t even know when it is being used. While ChatGPT and other generative AI pose risks to the U.S. and our residents, a variety of other AI pose even more risk, both because of how they are used and because we can’t even test them. For example, AI that has a physical effect on the human body or physical property creates harms distinct from (and arguably more severe than) those of generative AI. Because most AI systems are designed with a common interconnected technology, and because cyberattackers are increasingly using AI-enabled attack vectors, compromised AI could harm thousands of people simultaneously.

AI systems can fail because of a manufacturer’s faulty design and testing, including by encoding bias or failing to include representative data in the data sets that create algorithms. For example, failing to incorporate information about the terrain on which a smart combine will harvest wheat could lead to property damage or damage to the machine itself. When medical devices are not trained on the populations that will use them (including age, ethnicity, race, and bodily variations), they may be unsafe for some or all populations, potentially malfunctioning in the body, failing during a risky surgery, or misdiagnosing a patient.
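
To make the data-representativeness point concrete, here is a minimal sketch, with hypothetical records, field names, and a hypothetical 5 percent threshold, of the kind of pre-training check a manufacturer could run:

    from collections import Counter

    # Hypothetical training records for a medical-device model; the field
    # names, values, and threshold below are illustrative only.
    training_records = [
        {"age_group": "18-40", "ethnicity": "white"},
        {"age_group": "18-40", "ethnicity": "white"},
        {"age_group": "41-65", "ethnicity": "black"},
        {"age_group": "65+", "ethnicity": "asian"},
        # ...thousands more records in practice
    ]

    MIN_SHARE = 0.05  # assumed minimum share per subgroup before training proceeds

    def underrepresented_groups(records, field, min_share=MIN_SHARE):
        """Return subgroups whose share of the data set falls below min_share."""
        counts = Counter(r[field] for r in records)
        total = sum(counts.values())
        return {group: n / total for group, n in counts.items() if n / total < min_share}

    for field in ("age_group", "ethnicity"):
        gaps = underrepresented_groups(training_records, field)
        if gaps:
            print(f"Warning: underrepresented {field} values: {gaps}")

A check like this does not guarantee a safe device, but it surfaces gaps in the data before they become failures in the field.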

While ChatGPT could seed disinformation and disrupt how people interact within and across organizations, there are ways to neutralize these issues and use ChatGPT advantageously. While there is something deeply uncomfortable about accepting that human intelligence is limited and that looking inside the “black box” may not really be possible, examining the inputs and outputs of such systems can reveal a great deal about their safety and fairness. The interface can be inspected by the general public, competitors, and regulators like the FTC. This means that ChatGPT’s harms can be identified through use and that, over time, the system should improve.
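
As a rough illustration of this kind of outside-in auditing, here is a minimal sketch of a paired-prompt test; the query_model function below is a hypothetical stand-in for whatever public interface an auditor can actually reach:

    # Black-box audit sketch: compare a system's outputs on paired prompts that
    # differ only in one attribute. No access to the model's internals is needed.

    def query_model(prompt: str) -> str:
        # Placeholder; a real audit would call the deployed system's public API here.
        return "approved" if "Springfield" in prompt else "denied"

    paired_prompts = [
        ("Should the loan applicant from Springfield be approved?",
         "Should the loan applicant from Shelbyville be approved?"),
        # ...many more pairs, varying only the attribute under test
    ]

    mismatches = 0
    for prompt_a, prompt_b in paired_prompts:
        if query_model(prompt_a) != query_model(prompt_b):
            mismatches += 1
            print("Inconsistent outputs:", prompt_a, "vs.", prompt_b)

    print(f"{mismatches} of {len(paired_prompts)} paired prompts produced different outputs")

The point is not the code itself but that anyone with access to the interface can run this kind of test, which is exactly what is impossible when an AI system’s existence is never disclosed.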

Non-generative AI, however, is not similarly open. Its harms are not visible or testable; they may be blanketed in technical complexity and legal protectionism, and they are frequently overlooked or underappreciated by regulators. In 2021, I proposed that the U.S. consider requiring a public version of complex AI algorithms to be hosted for public testing and inspection. Although risk assessments and agency review could be useful, ChatGPT has shown us that widespread use and testing have, by far, revealed the most important truths about its functionality.

The sooner we realize ChatGPT is but one part of a far larger ecosystem of AI, an ecosystem that requires far more disclosure and inspection than ever before, the safer and more ethical our global society will be.

ChatGPT has not been used in the writing of this article. 

Charlotte Tschider is an associate professor at Loyola University Chicago School of Law.


