
White House releases first government-wide policy to mitigate AI risks 

The White House on Thursday morning released its first government-wide policy aimed at mitigating the risks of artificial intelligence (AI), requiring agencies to take further action to report the use of AI and address risks the technology may pose.

Federal agencies will be required to designate a chief AI officer, report how they use AI and add safeguards as part of the White House memo.

The announcement builds on commitments President Biden laid out in his sweeping AI executive order issued in October.

“I believe that all leaders from governments, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm, while ensuring everyone is able to enjoy its full benefits,” Vice President Harris said on a call with reporters.

The new memo and requirements help promote “the safe, secure and responsible use of AI” by the federal government, she said.

As part of the memo, agencies will have 60 days to designate a chief AI officer, who will coordinate the use of AI across the agency.

The memo does not specify whether the position will be a political appointee; the administration expects that it will be in some agencies and will not be in others, according to a senior administration official.

Agencies will also be required to create annual “AI use case inventories” listing each of their AI uses, submit the inventory to the White House Office of Management and Budget and post it publicly.

As part of the inventory, agencies will need to identify which use cases are “safety-impacting and rights-impacting” and report additional detail on those risks.

Some AI use cases will be exempt from the inventory, such as those in the Department of Defense whose sharing would be “inconsistent with applicable law and governmentwide policy,” according to the memo.

By December, agencies will also be required to implement concrete safeguards when using AI in ways that could affect Americans’ rights or safety, according to a fact sheet released by the White House. For example, travelers would be able to opt out of TSA facial recognition at airports without delay.

Agencies that cannot apply the safeguards must stop using the AI system, unless agency leadership justifies why ceasing use would increase risks to safety or rights or impede critical agency operations, according to the fact sheet.