U.K.-based fact-checking startup Logically launched a new service Monday aimed at helping governments and nongovernmental organizations identify and counter online misinformation using a blend of artificial intelligence and human expertise.
The Logically Intelligence (LI) platform collects data from tens of thousands of websites and social media platforms, then feeds it through an algorithm that identifies potentially dangerous content and organizes it into narrative groups.
“Over the last few years, the phenomenon of mis- and disinformation has firmly taken root, evolved and proliferated, and is increasingly causing real world harm,” Lyric Jain, founder and CEO of Logically, said. “Our intensive focus on combating these untruths has culminated in the development of Logically Intelligence, based on several years of frontline operations fighting against the most egregious attacks on facts and reality.”
The company views the service as a way to help institutions, including social media platforms, react more quickly to burgeoning misinformation narratives.
Jain told The Hill that he hopes the platform will improve information and intelligence sharing in the wake of the deadly insurrection at the Capitol earlier this year, which was planned in publicly accessible online spaces but seemingly missed by some authorities.
“We think it’s a really good time for us to be able to empower … individuals to national governments with something like Logically Intelligence,” he said in an interview, noting that the service could also help identify drivers behind coronavirus vaccine hesitancy.
LI provides users with a customizable “Situation Room” that organizes potentially dangerous pieces of content and shows links between them. For example, the platform could chart how a particular concept traveled from a fringe platform to a mainstream social media site, helping the user figure out how to block falsehoods before they proliferate.
It also identifies inauthentic accounts and can potentially be used to locate networks of them.
Logically touts its artificial intelligence and team of expert researchers as differentiating factors that will help it organize content more quickly and usefully.
“What separates this from traditional social monitoring and internet monitoring tools is we then use all of the learning that we’ve done in terms of our artificial intelligence model, everything we’ve learned from our consumer products and projects we’ve worked on previous to this, to then classify that content,” global head of product Joel Mercer explained.
The platform also offers users several countermeasures once misinformation narratives have been detected, including investigative reports from Logically’s subject-matter experts, ways to flag content to platforms and built-in fact checks.
LI has been tested with some government agencies over the last year. The company worked with an undisclosed battleground state during the 2020 U.S. election to identify misinformation and coordinated activity that might hurt election integrity.
According to Logically, it helped the state push back on false narratives by identifying who was being targeted and amplifying accurate information through trusted local officials.
The company has built in safeguards aimed at ensuring the LI platform is not misused: it maintains a list of permissible use cases and plans to monitor how the tool is applied.
Logically, which was founded in 2017, previously launched a service focused on fact-checking news. It also produces research on misinformation topics such as the QAnon conspiracy theory.