Self-regulation is the answer to our AI quandaries
One way AI is different from other technologies of our Information Age is that Americans want regulation from the start. People were at first happy to let Silicon Valley grow unrestrained with the internet, smartphones and social media, until problems with censorship, free speech and misinformation made them angry. But this time, with AI, retroactive regulation won't be enough. People want AI guardrails up front because of the warnings coming from so many quarters, including from the AI leaders themselves.
But Americans don't trust the government to build those guardrails. Sixty-two percent of voters would like the tech industry, not lawmakers, to spearhead AI regulation, according to a recent Harvard CAPS/Harris Poll. Almost equal proportions of Democrats, Republicans and independents agree: They don't want AI companies held back by out-of-touch lawmakers or stopped completely, but they do want these pioneers to keep themselves in check.
What would stepped-up self-regulation look like? It would start with labeling AI-generated content and making sure that the source of information, where publicly available, can be traced. Computers are not people, and it's dangerous when they pose as people. AI, under existing codes, is obligated to make that clear, but that line is becoming awfully fuzzy. And certain uses would be off limits, such as controlling devices and weapons that could be turned on humans.
Americans are skeptical of our lawmakers' ability to regulate because the government completely failed to rein in social media over the last decade. When it comes to social media and free speech, the only thing people like less than the tech companies is the lawmakers. Mark Zuckerberg's biggest PR win was the 2018 hearing at which some senators revealed that they didn't know Facebook made its money from advertising. And even where there is bipartisan agreement, such as distrust of TikTok or the desire for stricter data privacy protections, it hasn't translated into major policy.
Because of Washington's recent tech fumbles, people have forgotten that our government used to respond effectively to emerging technology. During the rise of broadcast television, content standards were established quickly in response to controversy over potentially inappropriate material reaching millions of family homes. The Communications Act of 1934 and the establishment of the Federal Communications Commission were followed by the National Association of Broadcasters' voluntary Television Code. This public-private cooperation set the tone for successful self-regulation of the television industry for the next 50 years.
There are some reasons to be optimistic that collaboration on AI regulation could happen. One is that both sides are talking about the need for it, and all the major tech executives are coming to meetings convened by Senate Majority Leader Chuck Schumer. CEOs have already willingly sat for congressional hearings, and major tech companies have given voluntary commitments on AI risks to the White House.
Another upside is that AI is not, at this point, a partisan issue. According to the Harris Poll, Republicans are more concerned about job losses and Democrats are a little more optimistic about the technology overall, but voters from both parties agree that fears about AI are not overblown. They agree the top three concerns are that AI could spread misinformation, enable widespread fraud and trigger massive unemployment. Preventing the technology from disrupting employment may not be possible, but fraud and deliberately false information are likely areas of focus.
If AI does become a politically polarizing issue, expect people's desire for regulation to increase. Social media platforms fell into that quagmire once users started posting about politics, and now every moderation decision seems to infuriate all sides. As the 2024 election cycle heats up, politicians and media outlets are already experimenting with AI-generated ads and content. The role of AI in politics could be the first hotly contested frontier of regulation.
Silicon Valley's unique opportunity on AI regulation is that, despite years of tech backlash, people are still giving it the benefit of the doubt. Tech companies should listen to the American people and take the lead on self-regulation while keeping Washington in the loop. The biggest test to come for AI will be striking the right balance between the old motto of "move fast and break things" and the growing demand for guardrails.
Mark Penn is chairman of The Harris Poll and CEO of Stagwell Inc.