
Why the Senate may be accidentally giving carte blanche to unethical AI

OpenAI CEO Sam Altman answers questions during a Senate Subcommittee on Privacy, Technology, and the Law hearing to discuss oversight of artificial intelligence on Tuesday, May 16, 2023. (Greg Nash)

OpenAI’s CEO, Sam Altman, recently testified at a Senate subcommittee hearing. This was a moment many had been waiting for, and to their credit, the senators seemed to have done their homework and fostered a critical and constructive dialogue. They summarized and voiced concerns raised in mainstream and scholarly publications and formulated challenging questions. Among other remarks, Sen. Richard Blumenthal (D-Conn.) noted: “… There are basic expectations common in our law. We can start with transparency. AI companies ought to be required to test their systems, disclose known risks and allow independent researcher access…” (YouTube, 1:10:16). There is a lot to unpack in this requirement, mainly regarding the scope of testing, which remains unspecified and ambiguous.

The scope of what most of us (and perhaps the senators) consider relevant to testing seems substantially more limited than what tech companies understand and carry out under the banner of testing. Take Meta, for example. Ironically, in a 2016 interview conducted by Sam Altman, Meta’s CEO, Mark Zuckerberg, was asked how an “innovation culture” was created at Facebook. Zuckerberg responded:

“…We invest in this huge testing framework. At any given point in time there aren’t [sic] just one version of Facebook running in the world, there’s probably tens of thousands of versions running because engineers here have the power to try out [an] idea and ship it to maybe 10,000 people or 100,000 people and then they get a readout on how that version of what they did, whether it was a change to show better content [in] a newsfeed or UI [user interface] change or some new feature, they get a read out on how that version performed compared to the baseline version of Facebook that we have on everything that we care about: On how connected people are, how much people are sharing and how much they say that they’re finding meaningful content, business metrics like how much revenue we make and engagement of the overall community.” (YouTube, 09:37)

Clearly, “testing” could mean validating a system and confirming its reliability to improve quality, but as Zuckerberg explained, what tech companies also label as testing involves conducting research and generating new knowledge about how each user (or group of users) engages with their platform. This knowledge allows tech companies to draw meaningful conclusions about users’ habits, lifestyle, social class, mental health and many other indicators of personality. They can use variables including the time, regularity or length of use, types of engagement with content compared with other users, the devices and networks used, and many other indicators that do not require knowing a user’s identity. Indeed, as Zuckerberg explains, the goal could be increasing engagement (a euphemism for addictiveness in the tech industry) or boosting the company’s revenues, and as reports show (see Statista’s social networking report and Sprout Social’s report), both goals have been consistently achieved.
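To make concrete what such a testing framework does, here is a minimal, purely illustrative sketch in Python of how users might be bucketed into experiment variants and how each variant’s metrics might be compared against a baseline. Every function name, metric and number below is an assumption introduced for illustration; nothing here reflects Meta’s or OpenAI’s actual systems.

    import hashlib

    VARIANTS = ["baseline", "new_feed_ranking", "new_ui"]

    def assign_variant(user_id: str, experiment: str) -> str:
        """Deterministically bucket a user into one variant of an experiment."""
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return VARIANTS[int(digest, 16) % len(VARIANTS)]

    def compare_to_baseline(metrics_by_variant: dict) -> dict:
        """Express each variant's metrics as a ratio of the baseline's metrics."""
        baseline = metrics_by_variant["baseline"]
        return {
            variant: {metric: values[metric] / baseline[metric] for metric in values}
            for variant, values in metrics_by_variant.items()
            if variant != "baseline"
        }

    # A hypothetical "readout" of the kind Zuckerberg describes: how each shipped
    # version performed against the baseline on engagement-style metrics.
    readout = compare_to_baseline({
        "baseline":         {"minutes_per_day": 31.0, "shares_per_user": 1.2},
        "new_feed_ranking": {"minutes_per_day": 33.5, "shares_per_user": 1.3},
        "new_ui":           {"minutes_per_day": 30.1, "shares_per_user": 1.1},
    })
    print(assign_variant("user-42", "feed_ranking_test"), readout)

The point is simply that each such readout is new knowledge about how users behave, generated without anything most people would recognize as informed consent.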

Now that both the EU and the U.S. are developing laws to regulate artificial intelligence and generative AI, the time is ripe to close this loophole and prevent high-tech companies from engaging in unethical practices. Similar to what Zuckerberg noted about Facebook, we might end up with thousands of versions of ChatGPT that offer widely different responses based on different variables, and this could allow OpenAI to conduct large social experiments under the banner of testing. While we do not currently know exactly what kind of information is collected about ChatGPT users or how and where it is processed, one can imagine that typing speed, the complexity of prompts, frequency and duration of use or attempts to regenerate responses could reveal numerous hints about a user’s personality, IQ, education level, language skills and digital literacy. What OpenAI does with this information is unknown, but tech companies have sold user information in the past in ways that rendered the larger society vulnerable (e.g., the Cambridge Analytica case). Indeed, when testing generates new knowledge about users, it makes them vulnerable, thereby giving platform owners the upper hand to benefit from them. Nevertheless, because these practices are labeled as testing, they are currently not treated as research, and there are good reasons why tech companies want to keep it this way.
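As a purely hypothetical illustration of how little it would take to turn such signals into a profile, consider the following Python sketch. The signal names, thresholds and labels are all invented for the sake of argument; there is no suggestion that OpenAI collects or infers any of these.

    from dataclasses import dataclass

    @dataclass
    class SessionSignals:
        typing_speed_wpm: float          # words per minute while composing prompts
        avg_prompt_length_words: float   # average words per prompt
        regenerations_per_prompt: float  # how often responses are regenerated
        sessions_per_week: float         # frequency of use

    def crude_profile(s: SessionSignals) -> dict:
        """Map raw behavioral signals to speculative attributes about the user."""
        return {
            "fluent_typist": s.typing_speed_wpm > 60,
            "writes_complex_prompts": s.avg_prompt_length_words > 40,
            "frequently_dissatisfied": s.regenerations_per_prompt > 1.5,
            "heavy_user": s.sessions_per_week > 10,
        }

    print(crude_profile(SessionSignals(72.0, 55.0, 0.4, 14.0)))

Crude as it is, a profile like this already constitutes research output about a person who only agreed to “testing.”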

Conducting research on individuals or groups has specific requirements, one of which is obtaining consent. Normally, companies get around this requirement by de-identifying data or by adding blanket statements along the lines of ‘We use your data to improve your experience’ or ‘We use your data to enhance our systems’ to their terms and conditions. While it might be true that users (often carelessly) accept terms and conditions, in the case of OpenAI, given Congress’s stipulation that testing is required without specifying the scope, requirements and limitations of testing, this could be seen as offering a free pass to do any kind of testing.

If you still think this is OK, a comparison is illuminating here: In universities, before conducting research with humans (even an anonymized online survey), researchers are required to I) be certified in research ethics and integrity, and II) submit an application to the Institutional Review Board (IRB) to receive its stamp of approval. The former requirement aims to ensure that researchers are aware of ethical guidelines and respect the rights, integrity and privacy of participants. The latter is a formal process in which a panel of experts reviews the research protocol to ensure that the research is conducted responsibly and risks are minimized. Despite these measures, various unethical practices happen in academic environments, some of which are investigated and result in sanctions. In commercial contexts, however, as Zuckerberg nicely enunciated, engineers can experiment with all kinds of features and create a wealth of knowledge about users without any oversight or approval, and yet this blatant lack of oversight is hailed as the secret sauce for success and promoted as “innovation culture.”

To sum up, Congress’s demand that OpenAI and other tech companies rigorously test their systems, without specifying the scope of testing or how test results are used and shared, underscores a stark vagueness that endangers all of us. We should demand that newly proposed laws on artificial intelligence specifically clarify the scope of testing and mandate that tech companies put in place ethical oversight mechanisms similar to those that exist in academia.

Mohammad Hosseini is a postdoctoral researcher in research ethics and integrity at Northwestern University. He has published three peer-reviewed articles on tackling the challenges of using generative AI in scholarly writing, in peer review, and the ethics of disclosing its use, as well as a preprint currently under review at PLOS One.


