Do we need a National Algorithms Safety Board?

In the United States, the National Transportation Safety Board is widely respected for its prompt investigations of plane, train, and boat accidents. Its independent reports have done much to promote safety in civil aviation and beyond. Could a National Algorithms Safety Board have a similar impact in increasing the safety of algorithmic systems, especially the rapidly proliferating artificial intelligence applications based on unpredictable machine learning? Alternatively, could agencies such as the Food and Drug Administration (FDA), Securities and Exchange Commission (SEC), or Federal Communications Commission (FCC) take on the task of increasing the safety of algorithmic systems?

In addition to federal agencies, could the major accounting firms provide algorithmic audits as they do when auditing the financial statements of publicly listed companies? Could insurance companies provide guidance for the tech community as they do for the construction industry in making buildings safer? We already see civil society and business groups taking action: Underwriters Laboratories, for example, is developing a Digital Safety Research Institute to provide the evidence-based research needed for safer algorithmic systems.

Algorithmic systems can cause deadly outcomes — as in the twin crashes of the Boeing 737 MAX — and other consequential harms, such as rejecting a job application, denying a bank loan, or falsely identifying someone as a perpetrator of a crime. Furthermore, malicious actors may use algorithmic systems for cybercrime, terrorism, hate speech, or political oppression.

To counter these threats, the Association for Computing Machinery’s TechBrief on Safer Algorithmic Systems calls for human-centered social systems that provide independent oversight and foster safety cultures. These governance structures offer technical and organizational strategies that could benefit developers and managers in every aspect of life: from work to healthcare, finance to insurance, housing to transportation, and manufacturing to online shopping.

Governments around the world, particularly in economically developed nations, have responded to the peril and promise of algorithmic systems by advancing policies and proposing laws to mitigate their risks in all sectors of society. In the U.S., the White House Office of Science and Technology Policy recently released its Blueprint for an AI Bill of Rights. The first of its five policy principles identified “safe and effective automated systems” as a national priority.

So, could a National Algorithms Safety Board provide valuable insights to improve future designs? Yes. Any eventual framework for government and industry oversight is likely to be tailored to each application domain, but some lessons will apply across domains. Government, industry, and non-profit organizations such as the Association for Computing Machinery should see each other as collaborators for the common good.

The ACM TechBrief makes two overarching points: 1) safer algorithmic systems will require multiple forms of sustained internal and independent oversight, and 2) organizational safety cultures must be broadly embraced and routinely woven into algorithmic system development and operation.

Those developing algorithmic systems can draw lessons from the aviation industry, where senior management provides a clear vision for safety and puts extensive financial and human resources behind that vision. The aviation industry carefully tracks performance, recording failures and the much more common near misses. These reports provide valuable early lessons both about possible failures and about the strategies employees used to prevent near misses from becoming failures. The AI Incident Database, which now contains more than 2,000 reports, provides a valuable resource for understanding what problems have occurred and what remedies were put in place.

Designers and policy makers can also learn from the aviation industry’s practice of relying on a flight data recorder to investigate how a plane crashed. Industry leaders such as the Association for Advancing Automation should encourage some form of black box for algorithmic systems: a secure store of relevant data that can be examined in the event of a catastrophe.

And as with aircraft certification, drug approval, and medical review boards, solid research comes first. Business, government, and academia all have an important role to play in supporting research on safer algorithmic systems, establishing human factors safety research, and applying hazard analysis to policy-making priorities.

And returning to the example of the pharmaceutical industry: FDA approval of a drug or medical device is no guarantee that it is safe. In fact, in the U.S. each year, manufacturers recall hundreds of drugs and medical devices that have already received FDA approval. Ensuring safety is an ongoing effort, in which industry plays a lead role.

The ACM TechBrief makes the same point: Perfectly safe algorithmic systems are not possible, but safer systems are.

As AI and algorithmic systems are built into widely used commercial products, we can’t fully predict all the ways these systems may go wrong. However, designers, business leaders, and policy makers do have it in their power to embrace a human-centered safety culture, which can make for safer algorithmic systems.

Ben Shneiderman is distinguished university professor emeritus in the Department of Computer Science at the University of Maryland and founding director of its Human-Computer Interaction Laboratory. He is a member of the National Academy of Engineering, a fellow of the Association for Computing Machinery, and the recipient of six honorary degrees in recognition of his contributions to human-computer interaction and information visualization. His most recent book is “Human-Centered AI” (Oxford: Oxford University Press, 2022).
