The views expressed by contributors are their own and not the view of The Hill

Exploring the future of AI and biology can help us fight known diseases

A warning sign is posted to the door of a medevac biocontainment unit aboard a military transport plane at Dobbins Air Force Reserve Base during a media tour, Tuesday, Aug. 11, 2015, in Marietta, Ga. (AP Photo/David Goldman)

There is an emerging strain of inquiry into the risks of artificial intelligence and synthetic biology. Recent public hearings and discussions have sought to explore these risks and raise awareness of the potential nexus of these tools with nation-state and terrorist actors.

While interesting and worthy of consideration, these discussions must also address the urgent need to correct deficiencies in existing preparedness measures.

The reality is that the risks of AI and biology are embryonic today, still overshadowed by known threats and challenges. The risk is not so much the AI-driven development of a novel virus or pathogen as the natural emergence of viruses or the weaponization of known diseases.

The failure to sustain vaccination programs is leading to the reemergence of diseases that were once far less common. The United Kingdom, for example, is experiencing alarming rates of measles, as are other countries in Europe. The U.S. is not immune to this trend, with pockets of new cases emerging across the country. A Milwaukee resident recently tested positive for mpox, once again highlighting the potential risk posed by smallpox and related orthopoxviruses.

While it makes for thrilling headlines and novels, the risk that al Qaeda or the Islamic State will manufacture a new virus is markedly low. The former certainly sought biological weapons, but its interest was not the creation of new viruses so much as the weaponization of existing pathogens such as anthrax. This will likely remain the case for the foreseeable future, although AI and the lowering of barriers to production could eventually create an alarming confluence.

The real risk will still be seen at the nation-state level — AI and machine learning are of greatest use to those who already possess a systemic advantage and established programs. Here, too, the risks are greater in the use of existing viruses and pathogens, not the development of novel diseases.  

This is not to suggest that policymakers can or should be complacent about the risks of AI and biological threats. Far from it — the speed and potential risks that emerge at this nexus strengthen the need for implementing and reinforcing basic preparedness measures. 

Disease surveillance and monitoring, stockpiling, planning and exercising for pandemic response all become even more important in the era of AI-enabled biology. If anything, policymakers should use the attention on AI and biological threats as a mobilizing tool to refocus time, energy and resources on preparedness measures.  

Put simply, if we are unprepared for known pathogens and pandemic risks, we stand little hope of being prepared for or responding to novel viruses, regardless of how they are created or from where they emerge.  

Artificial intelligence also creates opportunities to enhance and accelerate preparedness efforts. Using AI and machine learning to develop new medical countermeasures is a natural next step in preparing for the next pandemic. Just as the next disease could well be created by AI, so too could the next vaccine, medicine or course of therapeutics. More ambitiously, AI could be used to anticipate the likely path of the next mutation or viral evolution and to plan accordingly for early interdiction.

Leveraging these tools for planning and response will speed and smooth the delivery of critical interventions and better allocate resources in a crisis environment. Specially trained algorithms could aid disease surveillance, detecting the emergence of a known or novel pathogen far faster than human observation.  

These hopes and expectations about the power of AI in biology must be tempered by the technology's limitations. AI is not a magic wand that will solve all the problems of biology and human nature. Algorithms trained on poor-quality or incomplete data sets will provide bad answers. Biases built into a system and left uncorrected will yield biased outcomes.

Moreover, the algorithms and data resulting from AI are only as good as human use allows. The warning lights could well be flashing, but if the system doesn’t respond accordingly, the alert is useless. The challenge is not technology, but human fallibility.  

While looking to the future, it is vitally important not to lose sight of the present. The risks of synthetic biology and AI-enabled virology are certainly real, but not yet fully materialized. 

It is, however, increasingly the “bright shiny object” in policy discussions, and one that could overshadow both immediate threats and the immediate countermeasures policymakers must take to strengthen national and international preparedness.

Joshua C. Huminski is director of the Mike Rogers Center for Intelligence and Global Affairs at the Center for the Study of the Presidency and Congress. He is a George Mason University National Security Institute senior fellow and a nonresident fellow with the Irregular Warfare Initiative. 
