
US companies must consider AI safety for the rest of the world

President Joe Biden speaks about artificial intelligence in the Roosevelt Room of the White House, Friday, July 21, 2023, in Washington, as from left, Adam Selipsky, CEO of Amazon Web Services; Greg Brockman, President of OpenAI; Nick Clegg, President of Meta; and Mustafa Suleyman, CEO of Inflection AI, listen. (AP Photo/Manuel Balce Ceneta)

Last week, Sen. Chuck Schumer (D-N.Y.) hosted an Artificial Intelligence Forum, convening some of the nation’s top AI experts to begin charting a path toward safer AI. Last month, U.S. tech companies took a helpful step toward building meaningful safeguards for generative AI. 

In partnership with the White House, several of the corporations currently leading the field opened their large language models for a public red teaming event at DEF CON 31, a hackathon-esque approach to pressure-testing AI models to uncover how they might go off the rails. Ideally, this will lead to more robust AI models that reliably stay on track.

The whole world is taking note of what generative AI can do. Chatbots in particular have proven noteworthy thanks to their ability to interact naturally with people to process instructions and carry out tasks, easing the burden of administrative work or more creative-leaning efforts like writing a blog post or drafting functional computer code. 

But they are notoriously faulty for non-English languages, especially those considered lower-resourced — a term that applies to languages spoken by most of the world. Ask a chatbot to translate text from English to French and the results are reliable and seamless. Shifting the conversation into Amharic or Malay? Be prepared for a ramp-up of both incorrect translations and made-up “facts.” This is more than just an inconvenience; it means tools like ChatGPT, Google’s Bard and others will naturally behave more unpredictably and pose new types of safety risks when exported worldwide.  

It is good that Congress and the Biden-Harris administration are paying attention and looking for consistent ways to get companies to adopt a more open approach to red teaming. It’s good that they’re pushing for public commitments to safety and security. But as momentum builds to tackle some of these core challenges of generative language models, the U.S. must live up to Secretary of State Antony Blinken’s recognition that it has “a special responsibility because the companies that are leading the way on AI … are American companies.” This means defining safety, security and trust not just for an American context, but a global one.  

Systems like ChatGPT are built, first and foremost, for and by Western English speakers. Western-biased safety exercises will catch language promoting anti-Semitism or gender violence and build associated safeguards accordingly. But the average U.S.-based safety researcher helping to impose guardrails on these generative systems will be unaware of the many ways that ethnic slurs are used to sow division in Kenya or the dehumanizing power of racial slurs in South Africa. Safeguards aspiring to be watertight for harms understood in a Western context will function as a sieve for a majority of the world’s population.   

We’ve long seen how social media platforms’ content-boosting practices can result in disproportionate harm in countries where companies have opted not to invest in staffing up local content moderation teams or empowering local rights advocates to flag harmful content. If companies’ safety teams aren’t appropriately skilled to understand local nuance and act swiftly to address the risks, language that incites violence goes unchecked, with tragic consequences. But despite the widespread ills accompanying tech-fueled misinformation on its platform, in 2021 Facebook still dedicated just 13 percent of its misinformation budget to countries beyond the U.S. and Western Europe. In Afghanistan, the company’s anemic safety efforts led to just 1 percent of hate speech being removed from the site. 

The lesson lawmakers continue to resist learning is that allowing companies to triage the development of safety measures based on bottom-line impacts simply does not lead to safety for those in the so-called rest of the world.  

The way companies have demonstrated their desire to address the risks associated with large language models in the U.S. — iterating with commendable speed as new failure modes are identified — has been reassuring. When companies engage the public to ensure safety is prioritized, we all are better off.   

Voluntarily building safeguards makes good business sense for these companies today. Public trust in the biggest markets is a key factor in whether the technologies will lead to profit. But these incentive structures break down as soon as we look beyond the largest markets. Put simply, market dynamics alone will not ensure the safety of these tools on a global scale. Absent oversight, corporations will just not have the incentives.   

Every country and region will inevitably contend with what generative AI systems like ChatGPT mean for them individually. But it is American multinationals that are unleashing generative AI on the world. As U.S. lawmakers roll up their sleeves and begin their work to build an appropriate regulatory response, it’s critical they recognize that safeguards are needed at a global scale, and that additional incentives (or disincentives) will be required to ensure corporations build for safety on a global stage.    

Even when expressly aiming for AI that benefits humanity, companies have opted to preferentially benefit the portion of humanity that comprises Western society, while exploiting international workers in the generative AI supply chain — those tasked with ensuring tools like ChatGPT were trained on “safe” data. Why should we trust that they will do better when it comes to developing safeguards for these tools’ global proliferation? 

If U.S. policymakers take seriously their responsibility to curb the harm that generative AI is capable of sowing worldwide, they can help to ensure that these companies do the hard — but very necessary — work of prioritizing safety for all, not just a few.

Aubra Anthony is a senior fellow in the Technology and International Affairs Program at Carnegie, where she researches the human impacts of digital technology, specifically in emerging markets.
