Back to the future: Look to the 1980s for guidance on AI management
Many have called for slowing the development of large language models, the kind of artificial intelligence (AI) that powers ChatGPT. But when it comes to managing AI, we need to act now.
The federal government needs to form an AI Lead Rapid Response Team (ALRT) to track and manage the rollout of AI technology. Operating as a federally funded research and development center, this team would maintain an active list of AI risks to advance trustworthy AI, disseminate best practices for AI safety and help AI software producers verify their systems to ensure consumer trust and protection.
We have proven models for how an ALRT could be developed and implemented. Policymakers should consider an approach pioneered in cybersecurity at the dawn of the modern internet age: a similar time of promise, uncertainty and concern.
As network and internet technologies emerged in the mid-1980s, serious threats to the information carried on those networks soon followed. In response, the U.S. government formed the Computer Emergency Response Team Coordination Center (CERT/CC) at Carnegie Mellon University. Now a division of the Software Engineering Institute, CERT maintains its mission, partnering with government, industry and academia to improve the resilience of computer systems and respond to sophisticated cybersecurity threats. CERT led to the creation of the chief information security officer (CISO) council and enabled interagency coordination across the federal government. Other nations and allies adopted CERTs of their own (e.g., the EU-CERT and the Indian CERT), allowing information sharing across national boundaries.
While the threats posed by AI are of a different caliber from those at the dawn of the internet, a rapid response team of technology producers, federal policymakers and leading researchers could serve a similar role. ALRT would:
- Catalog AI incidents: Identify, catalog and categorize AI incidents, diagnosing them and advising on responses to future AI threats.
- Share best practices: As stress tests and data audits for responsible AI emerge, there is a danger of inconsistent approaches and confusion over which path to follow. ALRT would serve as the center for sharing a risk-informed view of best practices with government officials, business leaders and everyday consumers.
- Test and verify: By offering a voluntary “Underwriters Laboratories”-style seal of approval, ALRT would certify the efficacy of AI technologies and applications, working in partnership with government and industry to develop data disclosure approaches, testing standards and reporting tools.
With this unified mission and federal funding, ALRT would build on a proven model of partnership among government, industry and academia, handling proprietary information in a trusted manner to combat the uncertainties of AI.
We’ve done this before, and we can do it again. There is no time for delay. We need to act now.
Ramayya Krishnan is faculty director of the Block Center for Technology and Society and dean of Carnegie Mellon’s Heinz College of Information Systems and Public Policy. Martial Hebert is dean of the School of Computer Science at Carnegie Mellon University and a university professor of robotics.