
Deep integrity: Building an accountable AI culture

In my former position as the government’s top Intelligence Community watchdog, I raised an urgent concern related to the Ukraine whistleblower’s complaint, which led to former President Trump’s first impeachment. Less well known is the urgent concern I raised in April 2019 about the lack of government funding for AI oversight. It remains a concern now.

The U.S. must spend billions of dollars on artificial intelligence research to be “AI-ready” by 2025 and successfully compete with its adversaries, according to findings last month from a commission Congress established to examine ways to advance the development of AI for national security purposes. Currently, according to the report, “the U.S. government is a long way from being ‘AI-ready.’” To avoid falling behind AI-enabled competitors and geopolitical rivals, the report recommends bold investments in research to spur domestic AI innovation, with the goal of the Defense Department spending $8 billion per year on AI by 2025, and the U.S. spending $32 billion per year on non-defense AI R&D by 2026.

To ensure integrity and trust in AI, the breakneck pace of development must be matched by investments in oversight. To date, that has not been the case.

When I spoke out in April 2019, I noted an investment asymmetry between spending on AI for mission performance — which receives hundreds of millions of dollars in funding — and the minuscule amounts spent on oversight. I warned this investment asymmetry could lead to an accountability deficit, where intelligence oversight professionals, particularly privacy and civil liberties officers, would lack the funding needed to oversee AI effectively. The unintended, but inevitable, outcome of investment asymmetry would be reduced trust in AI.

Despite some progress, my concern persists, and extends beyond the U.S. defense and intelligence communities.

Two months ago, the European Union pledged more than $150 billion in this decade to develop next generation digital industries, such as AI. These massive public and private investments in AI create transformative technologies across governments and industries, but AI brings vast new risks and manifests those risks in perplexing ways. AI use is already widespread and embedded in vendor-provided software, hardware, and software-enabled services. Organizations are currently employing incredibly valuable, but potentially risky, forms of intelligence that may not be aligned with a culture of integrity. Responsible organizations do not accept such unchecked risks in the human forms of intelligence they employ. They should not do so with AI, either.

Among those that will be severely affected and tested by AI are lawyers and compliance officers. It may seem incongruous that these professionals will bear a heavy load. After all, AI depends on data, hardware, software, data scientists, and IT systems, which are not the natural domains of most lawyers and compliance officers. AI, however, will not be fully implemented or fully beneficial unless people trust it. Oversight professionals play a critical role in maintaining a trusted workplace, and they will have a similar role with AI.

One of AI’s greatest strengths is that it can engage in behavior typically associated with human intelligence — such as learning, planning, and problem solving. If we are to trust AI to mimic human thinking, it must act lawfully and ethically. To foster a culture of integrity, AI should be held to human standards. AI needs to be trained to act in ways that align with a culture of integrity, to report alleged wrongdoing, to cooperate meaningfully in audits and investigations, and to be held appropriately accountable for abuse, especially with respect to privacy and civil liberties.

For the most powerful forms of AI, we need to engineer moral courage and instill social consciousness into machines so they understand an organization’s objectives, legal limits, and core values, adopt them, reflect them, and retain them throughout their “life” cycles. They will also need to be protected from retaliation when they “speak up.” The proper, trusted, and effective use of AI, particularly for decisions with existential risks for organizations and profound consequences for individuals, will depend on thoughtful and vigilant oversight.

Currently, the largest impediment keeping most oversight professionals from helping ready us for AI is not a lack of engineering, but a lack of money and will, and this problem extends beyond government. We need to close the investment asymmetries and trust gaps that persist in government and insist on ethical AI guidelines and standards for the government and the private sector.

Given the profound benefits of AI for national security, the objective is not to reduce or delay spending on AI because it is risky, but to urgently accelerate spending on its oversight.

The nature of how AI technologies are designed, trained, and deployed makes it imperative to build integrity into them at the design stage. The AI technologies of tomorrow are being built today. We should make sure we have the necessary resources so these technologies are built with integrity in mind and in the machines.

Michael K. Atkinson is the former Inspector General of the Intelligence Community in the Office of the Director of National Intelligence — chief watchdog of the nation’s 17 intelligence agencies. He previously served in senior Justice Department roles spanning two decades. He is now with the law firm Crowell & Moring in Washington, D.C.


