The bipartisan Artificial Intelligence Research, Innovation, and Accountability Act of 2023 would define terms related to generative AI, including what is considered “critical-impact,” and create standards for AI systems to follow.
Under the bill, “critical-impact” AI systems would include those used to make decisions that significantly affect the collection of biometric data without consent, the management and operation of critical infrastructure, and criminal justice.
Notably, the bill would establish a system requiring organizations behind “critical-impact” AI to self-certify their compliance with those standards.
The proposal would task the Commerce Department with submitting, and regularly updating, a five-year plan for testing and certifying “critical-impact” AI.
The bill, co-sponsored by Sens. Roger Wicker (R-Miss.), John Hickenlooper (D-Colo.), Shelley Moore Capito (R-W.Va.), and Ben Ray Luján (D-N.M.), would also direct the National Institute of Standards and Technology (NIST) to develop standards for authenticating online content.
NIST would also be tasked with developing recommendations to agencies for technical, risk-based guardrails on “high-impact” AI systems.
“High-impact” AI systems are those developed to make decisions that significantly affect people’s access to housing, employment, credit, education, healthcare or insurance.
Companies deploying “high-impact” AI systems would be required to submit transparency reports to the Commerce Department.