The views expressed by contributors are their own and not the view of The Hill

What Dolly the sheep can teach us about regulating AI


“Never before in history has humanity been so unprepared for the new technological and economic opportunities, challenges, and risks that lie on the horizon. By 2025, we and our children may be living in a world utterly different from anything human beings have ever experienced in the past.”

You may be surprised to learn that this quote is neither recent nor about artificial intelligence. It was written 25 years ago by economist Jeremy Rifkin following the cloning of the sheep known as Dolly.

Few topics have stoked fear and confusion like cloning and biotechnology. But as their applications have become more mainstream and beneficial to society, they offer lessons for policymakers and the public alike as we grapple with anticipation and anxiety over the impact of artificial intelligence (AI).

The mixed emotions about generative AI are legitimate. We feel them ourselves. But the reality is that AI (which, much like biotechnology, has been around for decades) can be a largely positive and responsible force for society — if the path forward follows a set of guiding principles. To take hold, these principles must be developed with input from leaders in the private, nonprofit and academic sectors. 

After years of neglect, policymakers are now in a race to act. It’s tempting to rush forward given the daily whirlwind of speculation about generative AI in the media. However, we urge caution and learning from history, especially regarding the risks of overregulation and excessive centralization of AI rules.

What’s reassuring is that the U.S. has a long track record of balancing an embrace of innovation with a commitment to thoughtful regulation.

While there are differences between biotech and AI, commonalities abound. In each case, public officials generally lack basic knowledge of the applicable science. Few members of Congress understood the potential benefits of biotechnology, and many overestimated its risks. At one point, people at leading universities proposed research restrictions on biotech labs in Cambridge, Massachusetts, almost identical to those applied to nuclear weapons research.

Today, Massachusetts (alongside California) dominates the discovery of new medicines and has advanced human health. This is due in large measure to Ted Kennedy, the state’s liberal senator, who counterintuitively stepped in to prevent regulatory excess. 

Similarly, the Nobel Prize–winning discovery of genome editing in 2012 sparked concerns about controlling the future of humanity. However, society adopted an approach of forbearance, with temporary pauses on certain research areas, allowing for the evolution of self-regulation and new rules. Scientists, civil society and the private sector all played a role in leading these efforts, while responsible companies pursued business goals grounded in public trust, self-regulation and responsible restraint. The resulting positive effects of genomic science in treating diseases, enhancing detection and improving food security have been profound.

Similarly, AI (and its more accessible form, generative AI) holds the potential for broad societal improvement. To navigate this path responsibly and realize its potential benefits, policymakers must seek input from both technical experts and those whose lives will be affected. Collaboration with the private sector will play a crucial role in shaping a nuanced and comprehensive understanding of which laws and regulations will be necessary.

The Biden administration’s AI white paper and guidance from legislative leaders on both sides of the aisle provide a foundation for a responsible path forward. We can also learn valuable lessons from European leaders, who recently passed governance legislation to protect privacy and limit the use of AI, especially by police authorities.

Companies and civil society can lead the way by ensuring AI literacy among decision-makers and promoting accountability through transparent disclosure. Shedding light on real-world AI applications will help lawmakers better understand both the benefits and risks. One-size-fits-all approaches will be impractical, but subject-matter-specific rules can be refined by learning from astute and careful adopters of AI technology in everyday life.

By following these guidelines, Congress and the executive branch can build a system that advances the human condition while safeguarding against misuse. Embracing guiding principles and making use of the private sector’s expertise will help policymakers navigate the complexities of AI and ultimately harness its potential to improve life on Earth.

Christopher Caine is president of the New York-based nonprofit Center for Global Enterprise, and host of “The GET,” a podcast for enterprise leaders. David Beier, a San Francisco-based venture capitalist, was a senior executive at Amgen and Genentech and former chief of domestic policy for Vice President Al Gore.


Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
