Technology

OpenAI exec warns AI can become ‘extremely addictive’

FILE - The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, March 21, 2023, in Boston. The U.S. Federal Trade Commission has launched an investigation into ChatGPT creator OpenAI and whether the artificial intelligence company violated consumer protection laws by scraping public data and publishing false information through its chatbot, according to reports in the Washington Post and the New York Times. (AP Photo/Michael Dwyer, File)

OpenAI chief technology officer Mira Murati urged close research into the impact of artificial intelligence (AI) technology as it advances, to mitigate the risk of it becoming addictive and dangerous. 

Murati, a top executive at the company behind the popular ChatGPT AI tool, warned during an interview Thursday at The Atlantic Festival that as AI advances, it can become “even more addictive” than the systems that exist today. 

Companies are introducing features that have longer memory or more capability for personalization, which will produce results more relevant to users, she said. 

ChatGPT has already advanced considerably since its initial public release. On Monday, the company announced it is bringing a voice mode to the tool, which will let users hold a conversation with the chatbot on the go. 

“With the capability and this enhanced capability comes the other side, the possibility that we design them in the wrong way and they become extremely addictive and we sort of become enslaved to them,” she said. 

To avoid that outcome, she said researchers have to be “extremely thoughtful” and study how people use these systems as they are deployed, learning from “intuitive engagement” with users. 

“We really don’t know out of the box. We have to discover, we have to learn, and we have to explore. There is a significant risk in making them, developing them wrong in a way that really doesn’t enhance our lives and in fact it introduces more risk,” Murati said. 

Since ChatGPT launched nearly a year ago, it has skyrocketed in popularity and been integrated into Microsoft products.

Other companies, like Google, Amazon and Meta, have since announced and released large language models, creating an AI arms race — and leaving lawmakers racing to regulate the technology and the risks that come along with it. 

One risk lawmakers have been considering is the spread of misinformation, which can occur when AI systems produce “hallucinations,” or inaccurate results. That risk could be especially concerning during elections. 

Murati said she doesn’t think a “zero risk” situation is realistic, but the goal is to minimize risk while maximizing the technology’s benefits. 

“I think about it in terms of trade-offs. How much value is this technology providing in the real world and how much we mitigate the risks,” she said. 

One of the most immediate challenges highlighted by the launch of ChatGPT was students using it for schoolwork, in some cases to cheat. Murati said the technology will require adapting to new ways of teaching and will highlight new ways of learning. 

Another key concern lawmakers have been considering is the threat the technology poses to jobs. Murati agreed that those threats are real. 

She said the technology will call for “a lot of work and thoughtfulness” to address those risks. 

“Just like every major revolution I think a lot of jobs will be lost, probably a bigger impact on jobs than any other revolution. And we have to prepare for this new way of life,” she said.