What lawmakers need to do to police online content
Platforms with laudable mission statements about making the world a better place and doing no evil now find themselves dealing with the dark side of human nature in the connected world they have created. Content on their platforms can be malicious, aimed at covert manipulation or at stirring up dark emotions that trigger violence. Platforms like Facebook and Twitter recognize this danger and are scrambling to keep bad content off their services. They have employed armies of humans, roughly 20,000 in Facebook's case, to police platform content.
While such an action is commendable and should placate lawmakers and the public for now, it won't work as a long-term solution. Rather, the solution must be algorithmic, and while implementing "morals as code" will be challenging, it is the cleanest way to think about how such platforms can be regulated without violating the First Amendment.
Following the Pittsburgh synagogue shooting, many believe that platforms such as gab.com have "crossed a line" and should not be allowed to exist. But the more vexing question is, where is the line? How do we know whether an individual or platform has crossed it? Can machines help us find this line between okay and not okay, or is this an inherently human exercise?
Let us consider how machines might help. If we can create a sufficiently large dataset of cases labeled as okay and not okay, machine learning algorithms can induce our collective preferences and help us find the line that separates them. Given enough data, they can provide media platforms with concrete guidelines for determining when that line is being crossed. While such machines will make mistakes, which means we will still need humans and courts to resolve complex or ambiguous disputes, they can go a long way toward flagging content that crosses the line.
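To make this concrete, here is a minimal sketch of what inducing such a line from labeled examples could look like. The handful of posts, the labels, and the classifier choice below are illustrative assumptions, not a description of any platform's actual system; a real deployment would need a far larger, more diverse, and carefully audited corpus.

```python
# Minimal sketch: learning a "line" between okay and not-okay content
# from human-labeled examples. The tiny dataset below is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-labeled exemplars: 1 = crosses the line, 0 = okay.
posts = [
    "I disagree strongly with this policy and here is why",
    "People who think that are wrong, but fine, let them vote",
    "Someone should go hurt the people who organized this event",
    "Here is the organizer's address, make them pay tonight",
]
labels = [0, 0, 1, 1]

# A simple text classifier induces a decision boundary from the labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# New content is scored; high scores get flagged for human review.
new_post = "Let's show up at their house and make them regret it"
score = model.predict_proba([new_post])[0, 1]
print(f"probability of crossing the line: {score:.2f}")
```

The point is simply that once humans have supplied the labels, the boundary itself is learned from data rather than decided case by case by individual employees.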
On the other hand, expecting humans employed by social media platforms to be arbiters of morally acceptable content creates several problems. First, it forces those employees to decide what constitutes acceptable content. Second, it doesn't specify the objective function or the basis on which such decisions should be made, leaving this up to the "judgment" of individuals, which introduces inconsistency. We are already seeing some of the fallout of the existing approach: websites that paid Facebook over the years to build up their audiences have been shuttered in an instant.
The case for using humans to draw the line is that, at the current time, machines just aren't good enough for the job. This is undoubtedly true. However, the answer isn't to use humans as the solution, but to have them label the exemplars from which machines can learn the decision boundary.
The major risk of the data-driven approach is the possibility of perpetuating existing social biases. But as long as there is sufficient variance in the data (an essential condition, and one we should expect given the diversity of views in any free society), this problem can be mitigated if we can agree on acceptable false positive and false negative rates for the algorithm.
These thresholds will vary by society. A society such as ours would lean toward free speech, which means the algorithm must almost never flag legitimate speech as crossing the line; the cost of such false positives should be considered very high in a free society. What this implies is that we would allow inflammatory content as long as it doesn't cross the line into being incendiary.
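As a rough illustration of how such a societal threshold might be applied, the sketch below picks the flagging cutoff that keeps the share of legitimate posts wrongly flagged below a chosen limit. The scores, labels, and the 1 percent target are hypothetical placeholders, assuming a held-out validation set scored by a classifier like the one sketched above.

```python
# Sketch: choosing a flagging threshold so that legitimate speech is
# almost never flagged (a false positive budget set by society).
# The synthetic scores and labels stand in for a real validation set.
import numpy as np

def pick_threshold(scores, labels, max_fpr=0.01):
    """Return the lowest threshold whose false positive rate on the
    validation data stays at or below max_fpr (flag if score >= threshold)."""
    negatives = scores[labels == 0]           # legitimate content
    for t in np.sort(np.unique(scores)):
        fpr = np.mean(negatives >= t)         # fraction of legit posts flagged
        if fpr <= max_fpr:
            return t
    return 1.0                                # flag nothing if no cutoff qualifies

# Hypothetical validation data: 0 = okay, 1 = crosses the line.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=10_000)
scores = np.clip(rng.normal(0.3 + 0.4 * labels, 0.15), 0, 1)

t = pick_threshold(scores, labels, max_fpr=0.01)
print(f"threshold for a 1% false positive rate: {t:.2f}")
```

Different societies, or different regulators, would simply choose different values of max_fpr; the machinery for enforcing the choice stays the same.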
It might seem paradoxical that we would consider trusting machines with decisions as fundamental to humans as free speech. But in a world where everyone — including a fake entity — has a powerful amplifier in social media, there aren’t any obvious alternatives that can handle the problem in a scalable and objective manner. Indeed, such an algorithm could become a public utility that platforms can use to preempt getting into trouble and drawing the attention of lawmakers.
At the current time, the neutrality and profit-maximization objectives of social media platforms have turned their precision-targeting algorithms into weapons that pose grave threats to open democracies. While the Accountable Capitalism Act proposed by Sen. Elizabeth Warren (D-Mass.) is a step in the right direction in that it introduces social obligations for businesses toward their employees, customers, and society, it doesn't address the equally important issue of determining when social media platforms have crossed the line from hosting healthy dialogue to becoming weapons of mass manipulation that can be exploited by malicious actors. Intelligent algorithms can go a long way toward addressing this problem, with only the thorniest cases passed on to the courts for resolution.
Vasant Dhar is a professor at the NYU Stern School of Business and the Center for Data Science, and Director of the PhD program in Data Science. He was editor of the journal Big Data, which published a special issue on "Fake News and Computational Propaganda" in December 2017, prior to Facebook's public admission of misuse of its platform.