
Three necessities to regulate ever-evolving artificial intelligence

FILE – Alondra Nelson speaks during an event at The Queen theater, Jan. 16, 2021, in Wilmington, Del. On Tuesday, Oct. 4, 2022, the Biden administration unveiled a set of far-reaching goals to align artificial intelligence-powered tools with what it called the values of democracy and equity, including guidelines for how to protect people's personal data and limit surveillance. "We can and should expect better and demand better from our technologies," said Nelson, deputy director for science and society at the White House Office of Science and Technology Policy. (AP Photo/Matt Slocum, File)

In 1950, the British mathematician Alan Turing asked in a paper whether computers can think. In 2022, AI "went mainstream," and more of us began asking the same question.

As it turns out, despite impressive AI advances, the answer to Turing's question is not an easy yes or no. AI developments have brought wonderful new tools that can make life more fun, make work more efficient and create new jobs. However, they have also brought problems: privacy violations, behavioral manipulation, workplace surveillance and the easy spread of false information, to name a few. All need proper government regulation.

Which brings me to the next question: Can governments regulate thinking computers? The answer is yes — if three conditions are met.

First, we need to better educate the public, politicians and bureaucrats about AI. In a democracy, a policy needs public support to be backed by politicians. AI regulation is unlikely to have strong public backing if voters do not understand how AI affects them directly. 

According to a Pew Research Center survey conducted at the end of 2021, only 37 percent of Americans were more concerned than excited about the increased use of AI in daily life. Of those, only 2 percent were concerned about the lack of oversight and regulation. These numbers are small, and they suggest that people are unaware that automated hiring can lead to discrimination, that governments can scan the faces of music festival attendees against crime databases, that bots are cheap and easily used for disinformation, or that robots in the workplace contribute to burnout and job insecurity.

The public needs to understand both the benefits and the risks of AI and support the policies that serve them. There are good models around the world for teaching AI at scale. Finland, for example, started a program to train citizens in the basics of AI.

Politicians need to understand these issues well. It was embarrassing when a U.S. Treasury secretary declared in 2017 that job losses from AI were not even on the government's radar. While AI can create jobs, it can also destroy them, especially as automation changes the nature of work. Officials need to educate themselves on the topic: AI affects all areas of life, and it is unacceptable to be ignorant of it while holding public office.

Many government bureaucrats are uninformed as well, partly because regulatory agencies are underfunded. Governments can, of course, draw on outside experts for complicated legislation; feedback from leading Oxford experts, for example, identified serious problems with the ethical treatment of AI in the United Kingdom government's approach. But even if regulations are written by the world's top experts, implementing them still falls to agencies whose staff must understand the technical details well.

Some countries have already begun offering diplomats and tax administration officials training in AI. Such a course might help the employees of the New York City Department of Education, who somehow got the impression that banning ChatGPT, an AI chatbot that can produce essays, from school networks and devices would make the problem go away. It will not. Banning is not a solution!

Second, countries need designated agencies to prepare AI regulation and to conduct studies on the future of AI. These agencies need well-trained experts and proper financing. Such an institution, the Office of Technology Assessment, existed in the U.S. until 1995. In the United Kingdom and Germany, such institutions still exist.

Governments are in dire need of experts to map out how technology might change, anticipate what can go wrong, and regulate early and often. This includes scenarios in which a superintelligence escapes human control, boosts its own intelligence and acquires resources for its own use (yes, the "Terminator" scenario). Some, like Elon Musk, argue that it is "the biggest risk we face as a civilization"; others, like Mark Zuckerberg, call such warnings irresponsible.

Even if the odds of something like this happening are small, the magnitude of the potential harm makes it imperative to prepare. We do this in other areas, such as planetary defense against a possible asteroid impact. The probability of being hit by an asteroid the size of the one that killed the dinosaurs is about 0.000001 percent, yet NASA recently aimed DART (the Double Asteroid Redirection Test) at a small asteroid and altered its course, practice for the off chance that such a body might one day be headed for Earth.

Third, governments need to move faster. In the race against machines, governments are bound to lose if they move at their usual bureaucratic speed. Governments are slow, sometimes for good reason: in democracies, checks and balances, procedures and even bureaucracy reduce the odds that bad decisions are made. That is helpful when dealing with a would-be despot, but it is a handicap when dealing with fast-moving technology. And AI moves fast.

A 2022 study documents the rise of AI over time. In Turing's day, Theseus, a small robotic mouse, could navigate a simple maze. In 1992, TD-Gammon learned to play backgammon. In 2020, GPT-3 produced text indistinguishable from human writing. And in 2022, Minerva, a language model, solved complex mathematical problems.

Getting back to governmental speed: it took over 60 years from the first telegraph message for broadcasting to be regulated through the Radio Act of 1927, and 90 years for the Federal Communications Commission to be created. During those unregulated decades, the first telephone call was made, radio was invented and monopolies formed. The government, in effect, is chasing a high-speed train on foot.

Some countries are moving faster, though. China's Cyberspace Administration has already regulated deepfakes. But China is not a democracy and lacks the checks and balances a democracy possesses. The European Union, a group of 27 democratic countries, has a promising regulatory proposal on the table that, in the most optimistic scenario, might be adopted in mid-2023. The EU needs to stop running after the train and get on board.

AI will continue to evolve and change all our lives. To ensure that it improves our lives, we cannot continue to operate in the current “Wild West.” Stoics might say: Do not postpone, tomorrow is not guaranteed. In the case of AI: Act now, or you might not have a chance tomorrow!

Ioana Petrescu is a senior research fellow at the Harvard Kennedy School of Government. She is a former finance minister of Romania.


