Almost half of accounts sharing coronavirus tweets are likely bots: researchers

Almost half of Twitter accounts sharing coronavirus tweets are likely bots, according to Carnegie Mellon University research released Wednesday. 

The university’s researchers combed through more than 200 million tweets discussing coronavirus or COVID-19 since January and concluded that nearly 45 percent of the accounts behaved more like robots than humans, NPR reported. The study found that 82 percent of the top 50 influential retweeters, and 62 percent of the top 1,000 retweeters, are likely bots.

“We’re seeing up to two times as much bot activity as we’d predicted based on previous natural disasters, crises and elections,” Kathleen Carley, a professor in the School of Computer Science at Carnegie Mellon, said in a statement.

The team used a bot-hunter tool to flag accounts that show signs of being run by a computer, such as tweeting more often than is humanly possible or appearing to post from multiple countries within a few hours. They also analyzed each account’s followers, frequency of tweeting and how often the user is mentioned on the platform. 
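The signals described above can be illustrated with a minimal scoring sketch. This is a hypothetical example, not Carnegie Mellon’s actual bot-hunter tool; the field names, thresholds and weights are all illustrative assumptions.

```python
# Hypothetical bot-likelihood heuristic, loosely modeled on the signals
# described in the article. Thresholds and weights are invented for
# illustration only.
from dataclasses import dataclass


@dataclass
class AccountActivity:
    tweets_per_day: float    # average posting rate
    countries_last_6h: int   # distinct countries the account posted from recently
    followers: int           # size of the follower graph
    mentions_received: int   # how often other users mention the account


def bot_likelihood(a: AccountActivity) -> float:
    """Return a crude 0.0-1.0 score; higher means more bot-like."""
    score = 0.0
    if a.tweets_per_day > 144:       # more than one tweet every 10 minutes, nonstop
        score += 0.4
    if a.countries_last_6h > 1:      # "in multiple countries in a few hours"
        score += 0.4
    if a.followers < 10:             # sparse follower graph
        score += 0.1
    if a.mentions_received < 5:      # rarely mentioned by other users
        score += 0.1
    return min(score, 1.0)
```

Real systems would combine many more features with a trained classifier rather than fixed cutoffs, but the idea of scoring accounts against behavior no human could sustain is the same.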

The researchers identified more than 100 inaccurate COVID-19 narratives, including claims about potential cures, and found that the likely bots are especially active in discussions about reopening and ending stay-at-home orders. Some of the accounts tweet conspiracy theories, such as the false claims that hospitals are filled with mannequins or that the coronavirus spreads through 5G wireless towers.

Researchers said it was too early to determine which individuals or groups are behind the likely bots, but the tweets appear designed to sow division in the U.S.  

“We do know that it looks like it’s a propaganda machine, and it definitely matches the Russian and Chinese playbooks, but it would take a tremendous amount of resources to substantiate that,” Carley said.

A Twitter spokesperson responded to the research by citing a blog post from Twitter’s Nick Pickles, the global policy strategy and development director, and Yoel Roth, head of site integrity. The Monday blog post said the word “bot” can describe a wide variety of behaviors on the platform, not all of which violate its rules.

“People often refer to bots when describing everything from automated account activity to individuals who would prefer to be anonymous for personal or safety reasons, or avoid a photo because they’ve got strong privacy concerns,” the post said. 

The term could also be used “more worryingly, as a tool by those in positions of political power to tarnish the views of people who may disagree with them,” the post noted. It urged users to focus on the “holistic behavior” of an account and “not just whether it’s automated or not.”

“That’s why calls for bot labeling don’t capture the problem we’re trying to solve and the errors we could make to real people that need our service to make their voice heard,” the blog post said. “It’s not just a binary question of bot or not — the gradients in between are what matter.”

The spokesperson said Twitter has removed thousands of tweets containing misleading and potentially harmful information about the coronavirus. 

Last week, Twitter introduced new labels to be applied to misleading, disputed or unverified tweets about COVID-19 in an effort to curb misinformation and disinformation.

Researchers are still determining where the bots originate, but some reports have suggested Russian involvement: a Reuters report indicated that Russian media spread disinformation to amplify coronavirus panic and distrust in the U.S.

The U.S. intelligence community concluded that Russia interfered in the 2016 presidential election, and the 2020 election is just months away.

— Updated May 22, 10:00 a.m.