The ethical conflict between surveillance capitalism and artificial intelligence
Like any other multi-purpose technology, artificial intelligence (AI) can be used in ways that are either beneficial or harmful to humankind. The negligent or malicious use of AI could be unusually harmful, however, due to its unique characteristics.
This doesn’t mean that development of AI should or could be stopped. AI is also expected to be an incredibly beneficial technology, and given that its use is already widespread, it’s too late to put the AI genie back in the bottle. But AI companies should be proactive in creating and enforcing safeguards that discourage malicious use of their technology. The private sector should take the lead in policing problems before they occur.
AI’s characteristics
Some of the characteristics that make AI so beneficial are also why its misuse could be particularly harmful.
One of AI’s most prominent benefits is its efficiency. AI can already perform some tasks more quickly and at lower cost than humans. According to a survey of experts conducted by researchers at Oxford and Yale, there is an even chance that AI will outperform humans in all tasks in 45 years—including tasks that further criminal enterprise.
Another potentially harmful characteristic of AI is that it can be more easily duplicated and diffused than human expertise and effort. For example, once AI software is “trained” to do facial recognition, using the same software with additional camera feeds is a trivial matter compared to the cost of training and employing a human analyst for each additional feed.
Psychological issues must be considered as well. The anonymity and disassociation afforded by AI can act as psychological buffers that lower a user’s accountability or inhibitions. An example is the emotional distance from combat that’s been experienced by operators of weaponized drones.
The ethics of AI
These characteristics have prompted efforts to develop principles for the ethical use of AI, including transparency, responsibility, and consent, in order to limit the potential for its negligent or malicious use.
In terms of transparency, disclosure is critical to building public trust in AI systems. For example, when humans interact with AI in a social dimension (e.g., by texting with a chatbot), they should know they’re communicating with a machine and not a human being.
With respect to responsibility, AI companies should replace the “move fast and break things” approach with a culture of responsibility for their role in developing and diffusing AI technologies.
And as for consent, when AI collects and uses personal information, human oversight and reasonable privacy protections are critical.
The conflict with surveillance capitalism
Fostering an ethical approach to the development and use of AI could be a difficult prospect for Google, one of AI’s leading companies, due to its reliance on the collection and exploitation of personal data to make money—a business model known as “surveillance capitalism.”
There are inherent conflicts between surveillance capitalism’s business incentives and the principles of ethical AI.
Consider Google’s live demonstration earlier this month of Duplex, an artificial intelligence technology for Google Assistant that schedules appointments by telephone using a voice that sounds remarkably real, complete with human tics like “mm-hmm” and “uh.” In the demo, the AI appeared to call a hair salon and a restaurant without warning the employees that they were talking to a machine, and the people on the other end of the line appeared to have no idea the “caller’s” voice was artificial. The AI also appeared to record the resulting conversations without the employees’ knowledge (a practice that’s illegal in many states).
The use of AI to feign humanness while recording a telephone call with an unwitting hair salon or restaurant employee implicates the ethical AI principles of transparency, responsibility, and consent, and raises important questions about the ethos of Google’s business model.
To its credit, Google responded rapidly to the ensuing backlash by issuing a statement that addressed the transparency issue. Google said it’s “designing this feature [Duplex] with disclosure built-in, and we’ll make sure the system is appropriately identified.”
While this statement was presumably intended to be at least somewhat reassuring, it shouldn’t have been necessary. Someone at Google should have realized that duping people on the phone was inappropriate before the company decided to do the demo.
Why did Google misjudge this issue so badly?
The most likely answer is the culture that’s created by surveillance capitalism. When Google offers “free” stuff in exchange for access to users’ data, its end users become its product. Google’s real customers are the advertisers and other businesses that actually pay Google to influence consumer behavior (e.g., through targeted advertising).
With respect to its end users, Google has little incentive to follow the retailer policy of “the-customer-is-always-right.” Google’s incentive is to maximize the amount of personal data it can extract from its end users (and their friends and the people with whom they do business) while keeping its end users just happy enough that they’ll keep using Google’s free stuff.
The need to ensure its end users remain reasonably satisfied gives Google an incentive to be secretive about the data it collects and how the data is used. As Mark Littlewood recently wrote in The Times, “the real threat to social media giants lies in people realising the value of their data.” By obscuring the full extent and effect of its data exploitation from end users, Google makes it harder for them to know whether they’re getting a bad deal and makes it less likely that they’ll reconsider the bargain.
The same incentive for secrecy applies to Google’s relationships with its real customers (e.g., advertisers). Keeping advertisers in the dark about the ways online ad distribution systems actually work makes it harder for them to accurately value their ad buys. As a result, today’s advertisers often don’t know exactly what they’re buying in the “murky world” of online advertising.
Thanks to surveillance capitalism, AI’s ethical principles simply aren’t part of Google’s DNA. This must change. Applying the ethos of surveillance capitalism to AI is a recipe for disaster.
Fred Campbell is the former FCC Wireless Bureau Chief. He is currently the Director of Tech Knowledge.