The Federal Trade Commission (FTC) recommended Congress proceed with caution in promoting artificial intelligence (AI) tools to combat online harm in a report the commissioners voted to publish Thursday.
The report concluded that “great caution is needed in either mandating the use of or over relying on these tools even for the important purpose of reducing harms,” according to a summary delivered by FTC staff attorney Michael Atleson at Thursday’s commission meeting.
“While continued innovation in this area and research is important, Congress should not be promoting the use of these tools and should instead focus on putting guardrails on their use,” he said.
Tech companies need to be more transparent about, and accountable for, their use of AI tools and the impact of those tools before the appropriate guardrails can be figured out, he said.
The commission voted 4-1 to advance the report, with Republican Commissioner Christine Wilson joining the three Democrats. Republican Commissioner Noah Phillips dissented, citing concerns about how the report was prepared rather than objections to its conclusions.
The report was prepared under a mandate Congress issued in December 2021.
Although Wilson joined Democrats in voting to publish the report, she voiced some reservations.
Wilson said she agreed with the report’s recommendation that Congress “should generally steer clear of laws that require, assume the use of, or pressure companies to deploy AI tools to detect harmful content.”
But she said she is concerned about the “extensive discussion of misinformation, inoculation and pre-bunking in the report,” noting that the “vast bulk” of information online falls “somewhere between” the verifiably true and the verifiably false.
“I worry that the swift labeling of ideas as misinformation, inoculation and pre-bunking of ideas will stymie the development of new theories, research and ideas. The answer to speech that we view as incorrect or misguided is not suppression, but more speech that explains our opinion of the errors and presents an alternative perspective,” she said.
Phillips said, “on policy I generally agree with the top line conclusion,” but he had issues with how the report was carried out.
In part, he said, the report did not gather information directly from the individuals who use AI at tech companies, input he said was crucial to what Congress had asked the agency to evaluate.
“Had the commission arrived at this conclusion after seeking input from stakeholders and engaging in more than a cursory analysis of the efficacy of the AI tools in combating the harms identified by Congress, I could have signed onto it. But it didn’t,” he said.