
Artificial Intelligence: Is it safe?


This month, the White House Office of Science and Technology Policy (OSTP) issued a “Blueprint for an AI Bill of Rights.” The document maps out areas in which artificial intelligence (AI) might be a threat to our existing rights and establishes a set of rights that individuals should expect to have as they use these emerging technologies.

The OSTP blueprint sends two messages. First, it acknowledges that AI is affecting, and likely will transform, everything: changing medical practices, businesses, how we buy products and how we interact with each other. Second, it highlights the fact that these technologies, while transformational, can also harm people at an individual, group and societal scale, with the potential to extend and amplify discriminatory practices, violate privacy or produce systems that are neither safe nor effective.

The document establishes principles for our rights in the digital world. The next step is to determine how to operationalize these principles.

While there are many ways to think about translating principle into practice, it is helpful to ask one simple question: “Is it safe?” And, if the answer is unknown, to do the work of building a science of safety that can provide one.

It’s useful to look to other technologies and products that changed our world. Electricity, automobiles and telecommunications all radically changed the way people work, live, interact with each other and do business. Each had, and still has, issues with the way it affects us, but for all of them we ask a single, pointed question: Is it safe?

Electricity gives us light, power and warmth, but if the cost is fires that burn down our homes, its benefits are meaningless. Cars are synonymous with freedom, but what good is freedom if our ongoing driving destroys our planet? And the world of communication links us together, but not if the content that flows is misleading, hateful or damaging to our children.

For all these technologies, the question of safety is a constant. We always want these technologies to be useful, and we want them to be safe. To make them safe, we need to determine the conditions under which they would be damaging and put practices in place that prevent, mitigate, or resolve their potential harms.

As we consider the future of AI, we need to make the same demand and uncover the decisions and conditions that lead to harmful effects. We need to determine how to make intelligent technologies useful and safe.

Safe in that medical diagnostic systems must be trained with inclusive example sets so that individuals and groups are not excluded and undertreated.

Safe in that systems that help us make decisions need to be designed to help us — rather than manipulate us to take less so that someone else can take more.

Safe in that systems evaluating someone’s credit, skills for a job or fit for college have not been trained on historical data in which those decisions were biased, and thus do not make the future as sexist, racist and tribal as the past.

Safe in that the people using systems that might make mistakes understand that they cannot arrest someone simply because the machine said so.

Unfortunately, it is sometimes difficult to convince developers, and the businesses that employ them, that they need to change their behavior because they are treading on someone’s rights. The idea makes sense, but they struggle to translate it into the actions they need to take.

It may be more powerful to flip the conversation away from rights and toward responsibility: The responsibility to develop and deploy systems that are useful and safe.

If we are to uphold the rights outlined in the OSTP’s report, we need to develop a genuine science of safety for AI that we can translate into best practices for the people who are building it.

What specifically must we do to design and build systems with individual and societal safety at the center of that design? What new disciplines and sciences need to be established to equip us for the AI future? What mechanisms and tools are required to evaluate and monitor for safety? How do we develop and establish remediation approaches?

Establishing a safety ecosystem for AI requires more than policy, more than technological advances, more than good intentions. It isn’t a one-off solution or an example of a single system that causes no unforeseen harm. It requires a spectrum of interdisciplinary, multipronged, iterative sociotechnical contributions coming together to meet the responsibility of safety.

Kristian Hammond is the Bill and Cathy Osborn Professor of Computer Science at Northwestern University. An AI pioneer and entrepreneur, Hammond is cofounder of Narrative Science, a startup that uses AI and journalism to turn information from raw data into natural language.