Hurd says he was ‘freaked out’ by briefing while on OpenAI board
Former Rep. Will Hurd (R-Texas) said in an op-ed Tuesday that he was “freaked out” by a briefing while serving on the board of ChatGPT-maker OpenAI and called for guardrails on the development of “artificial general intelligence (AGI).”
“At one point in my two years on the board of OpenAI, I experienced something that I had experienced only once in my over two decades of working in national security: I was freaked out by a briefing,” Hurd wrote in the op-ed in Politico Magazine.
The briefing was about the artificial intelligence (AI) system now known as GPT-4, which Hurd suggested represented “the first step in the process” of achieving artificial general intelligence, a still-hypothetical form of AI that has human-like capabilities and can learn on its own.
“Indistinguishable from human cognition, AGI will enable solutions to complex global issues, from climate change to medical breakthroughs,” Hurd said. “If unchecked, AGI could also lead to consequences as impactful and irreversible as those of nuclear war.”
The former Texas representative, who stepped down from OpenAI’s board in June to run for president, pointed to the recent turmoil at the company with CEO Sam Altman’s high-profile ouster and return in calling for guardrails on the rapidly developing technology.
“As this technology becomes more science fact than science fiction, its governance can’t be left to the whims of a few people,” Hurd said. “Like the nuclear arms race, there are bad actors, including our adversaries, moving forward without ethical or human considerations.”
“This moment is not just about a company’s internal politics; it’s a call to action to ensure guard rails are put in place to ensure AGI is a force for good, rather than the harbinger of catastrophic consequences,” he added.
Hurd argued that AI should be held accountable to existing laws and that developers should compensate creators whose work is used to train AI systems.
He also called for a permitting process for powerful AI systems, in which developers would apply for a permit with the National Institute of Standards and Technology (NIST) before releasing their products.
“Just like a company needs a permit to build a nuclear power plant or a parking lot, powerful AI models should need to obtain a permit too,” Hurd said. “This will ensure that powerful AI systems are operating with safe, reliable and agreed upon standards.”