Story at a glance
- Deepfakes are among the many pressing issues facing cybersecurity experts, journalists and politicians.
- Built from cloned audio and video, these artificial intelligence-generated likenesses are difficult to distinguish from real ones.
- But new research shows our brains might be up to the challenge.
A rise in digital propaganda distributed by malicious actors has prompted concern about whether the general public will be able to tell real images and videos from fake ones.
In particular, deepfakes, or computer-generated video or audio clones of individuals, pose a threat to news organizations, cybersecurity officials and border patrol workers alike.
But new research out of the University of Sydney suggests that even when people cannot consciously tell which faces are real and which are fake, their neural activity can.
Using behavioral and brain-imaging techniques, including electroencephalography (EEG), researchers found that human brains encode and interpret realistic, artificially generated faces differently from real ones. EEG is a test that records electrical activity at the brain’s surface.
Participants’ brains distinguished deepfakes 54 percent of the time, as measured from their neural activity, while the participants themselves could verbally identify the deepfakes only 37 percent of the time.
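The “brain accuracy” figure comes from decoding participants’ recorded brain activity rather than from their spoken answers. The study’s exact analysis pipeline is not described here, but a minimal sketch of how decoding accuracy is commonly estimated from EEG data, using synthetic stand-in recordings and scikit-learn, might look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data: 200 trials, each a flattened 64-channel x 100-sample
# EEG epoch. Labels: 1 = real face, 0 = deepfake. Pure noise here, so
# accuracy hovers at the 50 percent chance level; real recordings with
# a weak signal would sit just above it, as in the study.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64 * 100))
y = rng.integers(0, 2, size=200)

# Standard decoding recipe: standardize features, fit a linear
# classifier, estimate accuracy with cross-validation.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"Cross-validated decoding accuracy: {accuracy:.2f}")
```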
“Although the brain accuracy rate in this study is low – 54 percent – it is statistically reliable,” said Thomas Carlson of the University of Sydney’s School of Psychology in a press release. “That tells us the brain can spot the difference between deepfakes and authentic images.”
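To see why a modest score can still be statistically reliable, compare it against the 50 percent chance level with a binomial test. The trial count below is an assumption for illustration; the article does not report the study’s actual number of trials.

```python
from scipy.stats import binomtest

# Assumed trial count for illustration only.
n_trials = 2000
n_correct = round(0.54 * n_trials)  # 54 percent correct

# One-sided test: is 54 percent reliably above the 50 percent chance level?
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"p-value: {result.pvalue:.4f}")  # well below 0.05 for this many trials
```

With enough trials, even a four-point edge over chance is very unlikely to arise by luck.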
In a behavioral experiment, individuals were shown 50 images of real and computer-generated faces and asked to say whether each one was real or fake. A second cohort viewed the same images while their brain activity was monitored by EEG; they were not told that half of the images were fake.
At the group level, participants “tended to interchange the labels, classifying real faces as realistic fakes and vice versa,” the researchers wrote.
Additional research aimed at understanding the discrepancy between neural responses and conscious identification could help in the fight against deepfakes, they continued, and might one day be translated into algorithms that flag fakes on digital platforms.
The data also show that despite their surface-level similarities, current deepfakes are flawed.
In the future, EEG helmets could help officials detect scams in real time. Carlson pointed to a recent case in Dubai in which cloned-voice technology was used to steal tens of millions of dollars.
“In these cases, finance personnel thought they heard the voice of a trusted client or associate and were duped into transferring funds,” Carlson said.
However, because the study marks only a starting point for the field, the authors cautioned that the research may never lead to a foolproof mechanism for detecting deepfakes.
“More research must be done. What gives us hope is that deepfakes are created by computer programs, and these programs leave ‘fingerprints’ that can be detected,” Carlson said.
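One family of such “fingerprints” reported in the deepfake-detection literature is spectral: image generators often leave characteristic high-frequency artifacts in an image’s Fourier spectrum. A minimal sketch of that idea follows; the function, the frequency cutoff, and the placeholder input are illustrative assumptions, not a published detector.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray) -> float:
    """Share of spectral energy at high spatial frequencies, one
    'fingerprint' cue explored in the deepfake-detection literature."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    high_band = spectrum[radius > min(h, w) / 4]
    return high_band.sum() / spectrum.sum()

# Placeholder input; in practice this would be a grayscale face image,
# and the decision threshold would be tuned on labeled real/fake data.
img = np.random.rand(256, 256)
print(f"High-frequency energy ratio: {high_freq_energy_ratio(img):.3f}")
```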
Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.