How social media fuels U.S. political polarization — what to do about it

When Speaker Nancy Pelosi (D-Calif.) instructed the House select committee investigating the Jan. 6 insurrection to determine what role social media played in fostering the attack on the Capitol, she implicitly embraced a widely held assumption: that the major tech platforms have fueled the acute level of political polarization in the U.S. 

Facebook begs to differ. Since Jan. 6, the largest social media site has sought to discredit what it calls an “albatross public narrative”: the claim that the platform contributes to partisan hatred. In March, Nick Clegg, the company’s vice president for global affairs, took on that narrative in an article on Medium, while in congressional testimony, Facebook founder and CEO Mark Zuckerberg blamed the media and political elites for causing division, saying “technology can help bring people together.”

In a new report from the New York University Stern Center for Business and Human Rights, we challenge Facebook’s disavowals and show that while the use of social media may not create partisan divisiveness, it does exacerbate it. An understanding of this connection is vital to preventing a repeat of Jan. 6 and regulating online platforms responsibly. This will only become more urgent as the country turns its attention to elections in 2022 and beyond. 

For starters, we can all agree that social media doesn’t bear sole responsibility for partisan animosity. Polarization began growing in the U.S. decades before Facebook, Twitter and YouTube appeared. Other factors — including the re-sorting of political party membership along ideological lines, the rise of hyper-partisan media outlets and Donald Trump’s hugely divisive influence — have contributed to the phenomenon.

But that doesn’t mean tech platforms don’t play a role. In an article published in August in the journal Trends in Cognitive Sciences, five researchers from NYU and Cambridge University summed up their far-reaching review of the empirical evidence: “Although social media is unlikely to be the main driver of polarization,” they concluded, “we posit that it is often a key facilitator.” That echoed the view of a separate group of 15 researchers who synthesized study results in a jointly authored article in Science in October 2020: “In recent years,” they wrote, “social media companies like Facebook and Twitter have played an influential role in political discourse, intensifying political sectarianism.”

Consider also a March 2020 paper describing subjects who stopped using Facebook for a month. Staying off the platform “significantly reduced polarization of views on policy issues,” researchers found, although it didn’t diminish divisiveness based strictly on party identity. “That’s consistent with the view that people are seeing political content on social media that does tend to make them more upset, more angry at the other side [and more likely] to have stronger views on specific issues,” Matthew Gentzkow, a Stanford economist and co-author of the study, told us in an interview. 

Other research might appear at first glance to raise questions about the relationship between social media and polarization. A 2017 study found that from 1996 to 2016, polarization rose most sharply among Americans 65 and older — the demographic least likely to use social media. A 2020 paper compared polarization levels in the U.S. over four decades to those in eight other developed democracies that experienced smaller increases in divisiveness or saw polarization decrease. Given the global nature of the internet and social media, these variations by country suggest that, over the long term, factors other than social media have driven polarization in America.  

But notice that both the age-group and inter-country comparisons spanned decades, including extended stretches of time before the emergence of social media. These long-term studies don’t speak directly to the drivers of partisan animosity in very recent years, especially since Trump’s election in 2016. More recent snapshots of the U.S. are thus more relevant. A paper published in March, based on a study of more than 17,000 Americans, found that Facebook’s content-ranking algorithm may limit users’ exposure to news outlets offering viewpoints contrary to their own — and thereby increase polarization.  

The very design of the automated systems that run the platforms is responsible for the amplification of divisive content. “Social media technology employs popularity-based algorithms that tailor content to maximize user engagement,” the co-authors of the Science paper wrote. Maximizing engagement increases polarization, they added, especially within “homogeneous networks,” or groupings of like-minded users. This is “in part because of the contagious power of content that elicits sectarian fear or indignation.”  

Some polarization is inevitable. Movements for social and racial justice that benefit society often provoke divisions, for instance. But at present, the malign consequences of heightened partisan animosity are playing out primarily, though not exclusively, on the political right. These consequences — including the erosion of trust in elections, the triumph of conspiracy theories over facts and an increase in political violence like the Capitol insurrection — imperil our democracy and therefore affect all Americans. Reforming social media, by itself, won’t cure these dangerous tendencies, but there are steps that would help.

First, President Biden should use his bully pulpit to focus national attention on the role social media plays in fostering partisan hatred. The White House then ought to work with Congress to produce legislation that would empower the Federal Trade Commission to draft and enforce a new set of standards for industry conduct. The standards should not give the government the power to dictate or veto content but instead to require a far greater degree of transparency by the tech platforms, including detailed disclosure of how their algorithms rank, recommend and remove content. Only with this kind of insight can researchers, policymakers and the public understand the pathologies associated with social media and come up with effective remedies. 

Acting on their own, social media companies should deploy their engineers and data scientists to figure out which platform features tend to heighten polarization so they can be modified. We know this is feasible because the platforms already adjust their automated systems on a temporary basis during periods of social or political unrest. If those adjustments are succeeding in tamping down incendiary content, they ought to be applied more generally — although with careful human oversight to avoid the excessive removal of legitimate debate. To reduce incentives for pushing content designed to provoke outrage and hatred, the platforms should also experiment with obscuring “like” and share counts, which would encourage consideration of content on its substantive merits rather than on whether it has “gone viral.”

To move ahead with any of these improvements, it will be necessary to push past industry obfuscation and recognize that social media is fanning the flames of political polarization.  

Paul Barrett is the deputy director of the NYU Stern Center for Business and Human Rights. Grant Sims is a research fellow at the NYU Stern Center for Business and Human Rights. Justin Hendrix is the founder and editor of Tech Policy Press.