
There’s no such thing as pausing AI research

FILE - Text from the ChatGPT page of the OpenAI website is shown in this photo, in New York, Feb. 2, 2023. (AP Photo/Richard Drew, File)

In March, an open letter from the Future of Life Institute called for a pause on the training of AI models more powerful than GPT-4, which powers tools such as ChatGPT. The signers, who included some of the biggest names in tech and academia, stipulated that if a pause could not be enacted quickly, governments should step in and impose a moratorium.

Some believe that this letter was just a ploy to grab attention and, in some cases, to drum up interest in business ventures. Whatever the intention, the outcome has been greater confusion around the topic of AI safety. In truth, the demand for such a pause is questionable, and the pause itself impractical.

Yes, we need to have a sober conversation about the realities and risks of artificial intelligence. Still, let's examine the case for a pause and why it could ultimately prove ineffective or even counterproductive.

Concerns about OpenAI and its current dominance in the AI space make sense. When GPT-4 was released in March 2023, much of the transparency that researchers had come to expect went out the window. OpenAI withheld details about GPT-4's datasets, methodology, architecture and even its size from the public, citing concerns over safety and competitive advantage. But GPT-4 wouldn't have been possible without the many prior discoveries and innovations openly shared by researchers in the field.

Although the letter called for transparency, pausing the creation of more powerful models without reversing OpenAI's decision to keep such details proprietary would leave us just as much in the dark six months from now as we are today.


The letter specifically addressed malicious uses of language models, such as their potential for creating disinformation. In January 2023, I published research on this very topic that concluded GPT-3-scale models can already be used to create content for malicious purposes, including phishing, fake news, scams and online harassment. Pausing the creation of GPT-5 therefore wouldn't prevent any of that misuse.

Another potential reason for the pause stems from worries about machines gaining true intelligence, sparking fears of Skynet or some other dystopian science-fiction outcome. A paper from Microsoft titled “Sparks of Artificial General Intelligence: Early experiments with GPT-4” described a number of experiments illustrating emergent properties within the model that could be considered a step toward machine intelligence.

These experiments were performed on an internal version of the GPT-4 model, one that hadn't undergone what's called fine-tuning, a process whereby a model receives additional training to make it safer and more accurate. The researchers discovered, however, that the final model, the one available to the public, cannot be used to reproduce all the experiments described in the paper. It appears that the fine-tuning process breaks the model in some fashion, making it worse at tasks that require creativity and intelligence.

But again, such claims are difficult to verify without access to the original model. We could be close to sparking true artificial general intelligence, but we'll never know, because only OpenAI has access to that significantly more capable version.

The letter also fails to acknowledge the AI-related problems we already face.

Machine learning systems are already causing societal harm, and little is being done to address it. The recommendation algorithms that power social networks are known to drive people toward extremism. Serious concerns have also been raised about algorithmic discrimination and predictive policing. How can we begin to solve long-term AI-related issues if we can't even face the real-world problems in front of us right now?

The letter specifically states, “Powerful AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable.” One of the ways of doing this is called “alignment,” a process most easily described as the creation of an artificial conscience.

But humans can't always agree on our own values, so if we're banking on humanity agreeing on what ethical values an AI should hold, we're in big trouble. Yet that is essentially what some of the letter's signers want us to invest in: imparting author Isaac Asimov's three laws of robotics to bar robots from harming humans. If you understand AI, you'll understand why this isn't possible.

But everyone knows there's no stopping innovation, even with some people advocating kinetic warfare against scientists building GPU clusters. Even if the entire world agreed to halt all AI research under threat of force, technology would still progress. Computers will eventually become powerful enough that ordinary people will be able to create artificial general intelligence in their garages. And while I am just as concerned as the next person about malicious actors creating evil AI, pausing GPT-5 isn't going to affect the probability of that happening in the slightest. In fact, further research into alignment could even provide additional tips and tricks for those looking to build an evil AI.

There is also a strong case for optimism about superintelligence. Yes, an evil AI that kills all humans or turns them into batteries makes a great movie plot, but it isn't an inevitability.

Consider that the universe is more than 13 billion years old and likely hosts an unfathomably large number of habitable planets. Many alien civilizations have probably already reached the point we're at today and pondered AI safety, just as we are now. If artificial superintelligence inevitably leads to the extinction of its host species and then spreads exponentially throughout the universe, shouldn't we all be dead already?

I asked GPT-4 to present theories about this conundrum.
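
For the curious, posing such a question takes only a few lines of code. Below is a minimal sketch of querying GPT-4 through OpenAI's API, assuming the openai Python package (pre-1.0 interface) and an API key stored in the OPENAI_API_KEY environment variable; the prompt wording is illustrative rather than the exact query behind this piece.

```python
# A minimal sketch of asking GPT-4 a question programmatically.
# Assumes the openai Python package (pre-1.0 interface) and an API key
# in the OPENAI_API_KEY environment variable; the prompt wording is
# illustrative, not the exact query used for this article.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": (
                "If artificial superintelligence inevitably destroys its host "
                "species and then spreads exponentially through the universe, "
                "why do we observe no evidence of it? Offer some theories."
            ),
        }
    ],
    temperature=0.7,  # allow some speculative variety in the answer
)

print(response.choices[0].message.content)
```

The nonzero temperature simply lets the model speculate a little; different runs will surface different theories.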

Aside from the obvious explanations, such as the possibility that an errant superintelligence simply hasn't reached us yet because of the vast distances between stars and galaxies, GPT-4 offered one particularly interesting suggestion: the extinction of our species may ultimately be caused by a war between human factions bickering over AI safety.

Andy Patel is a researcher for WithSecure Intelligence. He specializes in prompt engineering, reinforcement learning, swarm intelligence, NLP, genetic algorithms, artificial life, AI ethics and graph analysis.