Artificial intelligence seems poised to transform our daily lives. From digital assistants to self-driving cars, the technology promises convenience on an unprecedented scale. But AI also poses novel risks — and stokes many fears — related to security, privacy and job loss. That’s spurred policymakers to propose new regulations to prevent future problems.
Unfortunately, our elected officials and regulators may get more wrong than right when it comes to regulating AI.
Consider the many smart devices now integral to our lives. Digital helpers like Siri and Alexa use AI to understand requests, recommend products and services, and control connected gadgets in the home, like lights, thermostats, and appliances. Regulators worry these devices might collect too much personal data, so privacy rules could restrict what data can be gathered or shared. But those same rules aimed at protecting sensitive user information could unintentionally limit functionality people appreciate and make devices more expensive to produce and purchase.
Social media platforms, too, use AI algorithms to rank the content users see, elevating content for which users have demonstrated a preference. But critics allege these systems promote misinformation and exacerbate political polarization. Legislation might mandate transparency about how these algorithms work and provide users with recourse against perceived unfair treatment. Such rules could improve some aspects of online discourse, but they might also lead to censorship through the removal of content that, while controversial, still falls within the bounds of free speech, thereby stifling diverse perspectives.
When applying for loans or insurance, AI tools sometimes predict a borrower’s risk of default. But if AIs are trained on data that reflects historical discrimination, the models’ recommendations may perpetuate those biases. To prevent prejudice, regulators want to audit these systems. Eliminating bias is a worthy goal, but defining it is complicated, and excessive compliance costs could curb access to affordable financial services.
Facial recognition technology has raised alarms about governmental surveillance. Some states are restricting law enforcement’s ability to use the technology to identify suspects. This may safeguard privacy and prevent cases of mistaken identity, but restricting these tools also reduces crime-solving capabilities.
With employment, AI can streamline hiring decisions and even evaluate worker performance. But again, bias concerns abound. New York City passed a law mandating disclosure of algorithms used in hiring and now imposes third-party audits on these systems. While perhaps protecting applicants, such rules could add to the costs of hiring, discouraging job creation and leading to less productive workplaces.
In healthcare, AI promises more accurate diagnostics and treatments personalized to your genetics and lifestyle. But questions persist about accountability if AI makes an incorrect diagnosis or recommends an inappropriate treatment. Strict regulatory approval processes might prevent some of these problems but could also limit access to breakthrough therapies and drugs.
Across the board, a new generation of AI activists is calling for centralized regulatory bodies to oversee and control AI systems. But one size will not fit all. An expansive, inefficient federal bureaucracy will be ill-equipped to keep pace with rapid technological change. Rather than cast a broad net over technology as a whole, the merits of regulatory proposals should be weighed individually, with reforms tailored to specific uses of AI and their corresponding risks.
Soon, AI will come to shape our daily experiences as consumers, workers and citizens. But regulation could constrain its vast potential to improve life. The future need not be dystopian if we approach these complex debates with reason and a dose of humility rather than fear.
James Broughel is a senior fellow with the Competitive Enterprise Institute, a free market public policy organization based in Washington, D.C.