The views expressed by contributors are their own and not the view of The Hill

How much restraint is needed with AI?

This picture taken on Jan. 23, 2023, in Toulouse, southwestern France, shows screens displaying the logos of OpenAI and ChatGPT, a conversational artificial intelligence software application developed by OpenAI. (Photo by Lionel Bonaventure/AFP via Getty Images)

Artificial intelligence (AI) has become a major disruptor. Our digital society has facilitated its advances, with opportunities to impact every facet of life, including health care, transportation and security. It has also created threats that have prompted some to call for greater restraint on its development and implementation.

The risks have been well defined, including job losses, the spread of misinformation and even the development of highly autonomous weapons. A group of technology leaders has called for a pause on training AI systems more powerful than OpenAI’s GPT-4, claiming that some AI systems may pose “profound risks to society and humanity.” Geoffrey Hinton’s departure from Google over his concerns about AI brought even more attention to these potential risks.

Chatbots have garnered significant attention, providing human-like interactions that could arguably pass the Turing Test, the procedure proposed by Alan Turing to assess whether a machine’s conversation is indistinguishable from a human’s.

ChatGPT has been at the forefront of such advances, with concerns raised in several domains, particularly education. For example, ChatGPT has passed a bar exam and scored above the median on the MCAT, the exam used for admission to medical school.

The risks of chatbots have been well documented. They can spread misinformation through social media and other communication channels. They can be harnessed during political campaigns to sway voters with propaganda. They can foment social unrest with targeted messaging designed to incite anxiety and even provoke responses that endanger society.

But the genie is already out of the bottle. Attempting to pause or restrain such advances is futile. The more salient issue is how we learn to live with AI systems that have not even reached their full level of capability.

Placing restrictions on AI advances in the United States makes no sense. Though American corporations and the federal government are investing heavily in AI, other countries, including some that are not friendly to us, are moving full speed ahead. The AI arms race is in full gear, with no well-defined endpoint. If our nation and our allies do not lead the world in AI development, countries that may use such tools for nefarious purposes could gain the upper hand.

So what can our nation do to simultaneously restrain AI development that can be harmful while encouraging AI development for positive outcomes?

As with cybercrime, the ideal approach is to stop harmful AI activity at its source, which is nearly impossible. The next best option is to educate users so that they do not fall victim to it.

AI systems have the potential to act as Trojan horses: once they infiltrate some entity, they can wreak havoc and destruction. Yet stopping AI system development carries with it more risk than allowing it to develop untethered. This is because the people who will respond to such calls are not the people, organizations and entities that need to be stopped. And such bad actors are unlikely to listen to any calls for moderation and restraint.

By advancing and accelerating AI system development, not only will new capabilities be achieved, but systems to counter those capabilities will also emerge. This will create “checks and balances” that, over time, will provide the necessary guardrails that any pause most certainly will not.

Creating AI rules of conduct is appropriate and necessary. Conducting responsible AI system development is worthy of discussion and debate. Progress, not pause, is the path forward to success and a safe zone for AI systems.

Sheldon H. Jacobson, Ph.D., is a professor of computer science at the University of Illinois Urbana-Champaign. A data scientist and operations researcher, he applies his expertise in data-driven, risk-based decision-making to evaluate and inform public policy.


Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
