
Don’t let tech companies use us as guinea pigs

OpenAI CEO Sam Altman speaks before a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on artificial intelligence, Tuesday, May 16, 2023, on Capitol Hill in Washington. (AP Photo/Patrick Semansky)

“Gradual iterative deployment” is a phrase that OpenAI CEO Sam Altman used twice in his opening remarks at the company’s recent developers conference. What do these words mean, exactly? On closer inspection, not much, and certainly not an effective way to address the fundamental safety issues that artificial intelligence presents.

The first time Altman used this phrase, he said, “At OpenAI, we really believe that gradual iterative deployment is the best way to address the safety issues, the safety challenges with AI.” Here Altman acknowledges that OpenAI’s products have safety issues, which, he claims, are best addressed by the company releasing those products repeatedly and gradually. But how is the repeated and gradual release of AI products supposed to address safety issues or challenges? That remains unclear.

Altman’s second use of the phrase “gradual iterative deployment” doesn’t help answer the question, either: “As I mentioned before, we really believe in the importance of gradual iterative deployment. We believe it’s important for people to start building with and using these agents now to get a feel for what the world is going to be like as they become more capable.”

In other words, people will continue to use OpenAI’s products as those products are released, repeatedly and gradually. Presumably, if there are any safety issues or challenges with those AI products, OpenAI will address them. But Altman did not say as much — nor did he say how the company will address such safety issues or challenges.

The phrase “gradual iterative deployment” may sound nice, but it is utterly empty as a safety measure. The repeated and gradual release of products with known safety issues does nothing to address those issues.

Imagine a pharmaceutical company whose drug has known safety issues issuing this announcement: “At XPharma, we really believe that gradual iterative deployment is the best way to address the safety issues and challenges with XDrug. We really believe in the importance of gradual iterative deployment. We believe it’s important for people to start using our drugs now to get a feel for what the world is going to be like as these drugs become more effective.”

No one would take such a statement seriously. Pharmaceutical companies must make a case for the safety and effectiveness of their products before they are allowed to sell them for general use. Drug companies cannot simply use people as guinea pigs to test their products in real time rather than in a controlled, experimental setting where safety issues can be contained and addressed promptly.

In clinical research, randomized controlled trials are the gold standard for establishing the safety and efficacy of new treatments. In a clinical trial, an experimental treatment, such as a new drug, is tested on a cohort of patients who give their voluntary informed consent to participate. Only after a treatment has been shown to be safe and effective across the first three phases of clinical trials can it be approved for marketing. Even then, fourth-phase trials are conducted to establish the treatment’s safety and efficacy in a larger population and to study its long-term effects.

So why are we allowing tech companies like OpenAI to use all of us as guinea pigs without our consent? Why are we letting them test their products on us before they are proven safe and effective?

We know that digital products, especially AI-powered ones, can be bad for people’s physical and mental health. Tech companies like Meta, ByteDance and Google are facing lawsuits from dozens of states alleging that their AI-powered platforms — Instagram, TikTok and YouTube, respectively — are liable for “depression, anxiety, insomnia, interference with education and daily life, and many other negative outcomes” for young users.

As with pharmaceutical companies, the burden of proof should be on tech companies to show that their products are safe. Before they can unleash their products on society at large, tech companies should be required to test those products on a cohort of informed volunteers in a controlled, experimental setting to establish safety and efficacy.

Moti Mizrahi is a philosophy professor at the Florida Institute of Technology. His research is on the philosophy and ethics of science and technology.