Technology

The AI arms race is on. Are regulators ready?


The race among tech companies to roll out generative artificial intelligence (AI) tools is raising concerns about how mistakes in technology and blind spots in regulation could hasten the spread of misinformation, elevate biases in results and increase the harvesting and use of Americans’ personal data.

So far tech giants Microsoft and Google are leading the race in releasing new AI tools to the public, but smaller companies and startups are expected to make progress in the field.

This year isn’t the start of the AI boom — both leading companies have been laying the groundwork for launching AI products for years.

Still, Microsoft President Brad Smith has called 2023 an inflection point for AI, comparing it to 2007 for the smartphone or 1995 for the web — the years the new technologies exploded in popularity with the public.

Momentum for generative AI tools sped up in the fall, after the launch of popular tool ChatGPT, which delivers detailed answers to user queries in a conversational tone. 


The AI-powered tool is backed by parent company OpenAI, which received hefty investments from Microsoft before and after its launch.

As ChatGPT’s popularity has skyrocketed, regulators are attempting to lay down a foundation of ground rules for tech companies already speeding ahead.

As it stands, federal AI guidance is largely voluntary and broad, meaning companies are left to set and follow their own guardrails. 

“Right now in the US there’s sort of a lot of things going on, but it’s very early. I think regulators and policymakers don’t have a good understanding of how some of these technologies work,” said Rayid Ghani, a professor at Carnegie Mellon University.

“They don’t necessarily have a good handle on how to regulate them — whether they regulate the process, whether to regulate what it produces,” he added. 

As that process continues, tech companies are competing neck and neck in the race to be dominant in the AI sector.

Last week, Google announced the launch of Bard, a ChatGPT rival. The company also said it would introduce more AI-powered tools across its search function, including releasing AI-powered blurbs of responses to users’ queries before the traditional links the search engine produces. 

One day later, Microsoft said it would incorporate generative AI into a new version of its search engine Bing.

Long operating in the shadow of the dominant search engine Google, Bing reached 31.7 million visits the day after the announcement, 15 percent higher than its average daily volume over the previous six months, according to estimates from SimilarWeb.

The race to be on the cutting edge of the generative AI game poses concerns that the products and services may be risky for users if companies are rushing to “get in front of competitors,” Ghani said. 

“Not that they were robust before in a lot of these dimensions, but now they’re even less so,” he said. 

Some AI may be coming out before it is completely ready

The clearest example of that risk is generative AI tools giving false information in results, or what the industry is dubbing “hallucinating.” 

“In many cases, you would not know that it’s wrong unless you’re an expert yourself or you’re doing separate research on the topic and you’re able to verify the facts that the AI chatbot is advancing,” said Nathalie Maréchal, co-director of the privacy and data project at the Center for Democracy and Technology. 

Even during Google’s demonstration of Bard last week, the tool gave the wrong response.

In a gif showing an example search query for “what new discoveries from the James Webb Space Telescope can I tell my 9 year old about?,” Bard’s response incorrectly said that the telescope took “the very first pictures of a planet outside of our own solar system.”

In fact, the first image of an exoplanet was captured in 2004 by a different telescope, according to NASA’s website.

What happens when AI gets it wrong?

Delivering false information is not unique to generative AI chat tools; search engines have surfaced incorrect information in response to queries for decades.

But the shift toward incorporating these generative AI tools into search exaggerates those risks by taking away a level of sourcing for where the answers are coming from, Ghani said. 

As they incorporate these tools into search, Microsoft and Google seem to be leaning toward a process that will allow users to hover over text and see where the information was pulled from.

Those search functions have not been opened to the wider public yet. With ChatGPT, such a feature does not exist and there’s no indication as to where the information is pulled from. 

Maréchal said generative AI chat tools also pose risks over generating “plausible seeming disinformation” more easily and at lower costs for bad actors with the intent to spread false or misleading messages across social media.

That could make it more difficult for social media companies’ detection tools, which are largely powered by other forms of AI, to identify disinformation. 

“Companies should be thinking more carefully through the risks and harms of products before they release them,” she said. 

Regulation of AI is largely voluntary, worrying critics

As for specific regulation on AI, the government’s guidance is mostly optional at this point.

The National Institute of Standards and Technology (NIST) last month released an AI Risk Management Framework. The agency said the framework is intended for voluntary use and aims to improve companies’ ability to incorporate trustworthiness into the design, development and use of AI products and services. 

There are also more agency-specific regulatory actions being developed by agencies such as the Food and Drug Administration and the Consumer Financial Protection Bureau. Ghani said those aim to address specific issues related to AI use in those sectors. 

The NIST framework followed a blueprint for an AI Bill of Rights the White House released in October. Complying with the rights laid out by the White House was also voluntary. 

Data privacy and AI are a concern for the White House

One pillar laid out by the White House’s bill of rights was a push for data privacy.

Maréchal said the rise of generative AI tools underscores the need for lawmakers to pass a comprehensive data privacy bill, like the American Data Privacy Protection Act (ADPPA).

But despite bipartisan support, the bill failed to make it through Congress last year. 

“ADPPA wasn’t written with this specific risk category in mind, this particular product in mind. ADPPA will protect people from a wide range of harms, including discrimination on the basis of protected characteristics,” she said. 

Even if models were trained on personal data collected before new regulation takes effect, she said there is a strong “recency bias” in these systems: the behavioral data most useful for generating revenue is the most recent.

“The more recent [data] about your behavior is what’s going to be the most relevant,” she said. “Otherwise, we’d all be seeing ads about stuff that we were doing 10 years ago.”

Tech companies say they are following internal regulations when building AI

The lack of strict government regulations on AI as it evolves means much of the work is coming from self-imposed guidelines.

Microsoft said it is following the “Responsible AI Standard” it released in June to guide how it builds its systems. The company says the standard is based on six core principles: fairness; reliability and safety; privacy and security; inclusiveness; transparency; and accountability. 

While Microsoft said it uses its self-imposed principles, the company is also in discussions with key D.C. players, with executives in town last week for meetings. 

OpenAI’s CEO Sam Altman was also on Capitol Hill earlier this year and met with lawmakers including tech regulation-focused Sens. Ron Wyden (D-Ore.) and Mark Warner (D-Va.), according to aides for the senators. 

Companies are warning against creating too many AI regulations

Microsoft executives cautioned against D.C. imposing regulations that could slow down innovation. The industry has argued it should be able to move fast with guardrails in place, with the ability to add more as it goes. 

The company’s executives have also warned that competitors in China may not slow down simply because U.S. companies are subject to regulation.

At the moment, Microsoft has said that it is leading the AI race through its partnership with OpenAI, with Google and China’s Beijing Academy of Artificial Intelligence on its heels. 

The AI push by companies is raising questions about Chinese competition

Maréchal called companies’ framing of calls to slow down as a risk of letting China get ahead a “nonsense argument” that is used to “appeal to certain policymakers’ xenophobia.” 

“First of all, innovation is not inherently good because it’s innovation,” she said. “An innovative way to harm more people is not something that we want and China is using various AI tools for mass surveillance and social control, and that is not what we should be moving towards.” 

Other companies, big and small, are also expected to jump more aggressively into the generative AI race. 

Wedbush analysts Dan Ives and John Katsingris wrote in a Feb. 10 report that they believe Microsoft is leading the race so far “Usain Bolt style.”

But both analysts said the race will be “a long one” that they expect tech giants to spend billions on in the coming years. 

“Now the Street awaits Apple, Meta and others with their next poker moves in this AI Big Tech battle underway,” the analysts added.