
Our government shouldn’t use the public sector as a guinea pig for AI 


Last month, the City of New York came under scrutiny when its AI-powered chatbot was shown to be dispensing incorrect information, encouraging small business owners to break laws and violate worker and tenant protections. When asked about the chatbot’s shortcomings, first reported by investigative outlet The Markup, New York City Mayor Eric Adams responded that “[a]ny time you use technology, you need to put it out into the real environment to iron out the kinks.”  

Weeks later, the chatbot is still up, running, and dispensing faulty advice — and any “kinks” being ironed out are coming at the expense of real people.  

While the “move fast and break things” philosophy that Adams invoked may still hold sway among Silicon Valley entrepreneurs, it is a terrible guide for the public sector, since governments are responsible for the consequences of those breakages. The New York City chatbot episode is a perfect illustration of how premature adoption of new technology, and AI in particular, can create costs for governments — and the public — that far outweigh the benefits.

Built by Microsoft and released in October as part of the New York City Artificial Intelligence Action Plan (touted as the first of its kind for a major U.S. city), the chatbot is hosted on the website of the Department of Small Business Services with the stated goal of providing business owners with “access to trusted information” gleaned from official city sources to help them “start, operate, and grow businesses.” That seems innocuous enough. And what business owner wouldn’t be enticed by the promise of an immediate, straightforward answer instead of a tedious, all-too-familiar clickabout to find the right FAQ or form or phone number?  

If it had been well implemented, the chatbot might have boosted the city’s efforts to streamline and improve public services. Instead, it has raised a host of potential problems for the city government while placing its residents in harm’s way.

For example, according to The Markup’s investigation, the chatbot falsely stated that employers could take workers’ tips. On paper, New York City has some of the strongest labor protections in the United States. But these laws are difficult to enforce, even more so when a government-sanctioned chatbot is dispensing false information about them to business owners. And because wage theft reports are complaint-based, initiated by workers, such false information is likely to deter workers from filing a complaint. If workers suspect that their rights are being violated by having their tips withheld, employers can counter their claims, backed by an AI chatbot that carries a veneer of authority and legitimacy because it is deployed by the City of New York.

Protecting worker rights is already challenging, and technical systems can make it even harder. Research by Data & Society has demonstrated how automated scheduling software can scale unpredictability in work, while tip theft can be automated on platforms like Amazon Flex and Instacart. In fact, the Federal Trade Commission required Amazon to pay $61.7 million for withholding tips from Flex drivers. Existing laws like tip protection legislation and fair scheduling laws can hold employers accountable regardless of the tools they use, but labor protections are only as good as their enforcement.

A recent report by Data & Society and Cornell University examined a New York City law that requires employers to notify job applicants when automated employment decision tools are used in hiring or promotion; it found that compliance with the law appears to be astonishingly low and that its utility to job seekers is limited.

In dispensing false information, cities could also be creating legal trouble for themselves and for businesses. Air Canada recently lost a small claims court case brought by a passenger who said the airline’s AI chatbot had misled them about its bereavement policy. If the party in question were a government rather than a company, it could be liable for providing false information — and workers in turn could sue their employer for putting them in a position to act on that information and break the law.

The public should have opportunities to offer input into which technologies are introduced in public administration, since members of the public interface with these agencies and could be adversely affected by the AI systems they deploy. Ultimately, it’s an issue of trust: If the public can’t trust their democratically elected governments to know their rights — and these technological intermediaries represent those governments — it is unlikely they will trust those same institutions to protect their rights.

With governments on pace to adopt more technology, it’s imperative that any new tools be thoroughly evaluated and tested before they are released into the world. AI has the potential to dramatically improve many government processes and could help cities provide better services. But if these technologies are poorly designed, without attention to how they are integrated into society, they could change power relations and how people relate to their governments. In this case, the more likely outcome is the further erosion of trust in public institutions — and the undermining of the very laws and rules the city is responsible for clarifying and protecting.

Aiha Nguyen is the program director of the Labor Futures program at Data & Society, which seeks to better understand emergent disruptions in the labor force as a result of data-centric technological development, and create new frames for understanding these disruptions through evidence-based research and collaboration. 
