
Fiddling while AI burns

As Congress focuses on what to do about TikTok, leaders in the tech industry have warned of a far more expansive risk, one with a direct bearing on the role of tech in our daily lives. Data gathered by firms such as TikTok underpin advanced AI tools like the commercially available GPT-4 large language model (LLM) and support research into next-generation models like GPT-5 and beyond.

A host of prominent technologists and academics recently released an open letter, covered by TechCrunch, urging a six-month pause on the training of all AI systems more powerful than GPT-4, underscoring that the systems now being developed may lead to “powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.” The letter also reflects how far American tech oversight lags behind that of other countries.

U.S. digital oversight focuses on piecemeal solutions to expansive issues. Such an approach has long benefited the U.S. tech sector financially; firms grew quickly without comprehensive oversight. However, in the context of AI, building national and global frameworks for corporations and researchers is not merely a question of protecting against corporate exploitation. It is an area of existential concern for all users.

Other countries and regions provide examples, however imperfect, of more comprehensive data protection frameworks that protect users and communities. In the European Union, the Digital Services Act, passed in 2022, seeks to manage the systemic effects of data gathered by online intermediaries and platforms in order to protect the public interest. In South Korea, three laws (the Personal Information Protection Act, the Act on Promotion of Information and Communications Network Utilization and Information Protection, and the Credit Information Use and Protection Act) were amended together in 2020 to protect personal data and guide its commercial use. Even in China, the 2022 draft regulation on recommendation algorithms requires that tech firms not “endanger national security or the public interest.”

The European Union’s proposed AI Act, the first comprehensive AI law put forward by a major regulator anywhere, identifies three risk categories for new AI technologies. Systems that pose unacceptable risk, such as the government-run social scoring of China’s social credit system, would be banned outright. High-risk systems, such as CV-scanning tools that rank job applicants, are subject to specific legal requirements. The act also allows for limited oversight of emerging technologies that are not deemed high or unacceptable risk.

The AI Act offers one approach the United States could follow. At a minimum, it demonstrates a considered path toward the legal regulation of AI. Congress must work with technologists, both domestically and with our international allies and partners, to develop responsible principles for AI. There have been some unsuccessful legislative efforts in this direction, such as the American Data Privacy and Protection Act and the Health and Location Data Protection Act. These early data privacy bills would limit what data can be gathered and what data can be sold, but so far neither has passed. Even such data regulations fall well short of national AI standards governing how models can work with user data.

The next essential step is to work within the United States and with our allies and partners to map AI research practices and develop legally enforceable guardrails around AI research and technologies. The Declaration for the Future of the Internet, signed by 65 countries, and the White House’s Blueprint for an AI Bill of Rights are important steps in such collaborative thinking. Unfortunately, the pace of technological development and the power of new AI technologies render such non-binding agreements ineffectual if something goes wrong, or if companies do not share proprietary information about their new technologies.

How to define the public interest as it relates to what AI can or should do remains up for debate. What is clear is that the U.S. needs to follow the example of other developed economies and have this debate prominently, broadly and publicly, with the force of potential legislation behind it. Regulating individual firms may provide talking points for voters, but it also offers cover for actors developing world-changing technologies while paying little attention to interests beyond their own profitability.

Aynne Kokas is the C.K. Yen Chair at the University of Virginia’s Miller Center and the author of “Trafficking Data: How China is Winning the Battle for Digital Sovereignty.” Follow her on Twitter @AynneKokas.
