The telltale signs of AI-generated images, video and audio, according to experts
(NEXSTAR) – The internet is increasingly subjecting users to AI-generated images and text in advertisements, videos and social-media posts, whether we're aware of it or not.
Ideally, though, we all would be.
Due to advancements in generative artificial intelligence (or generative AI, as it's commonly called), it has become increasingly difficult to tell the difference between genuine and synthetic media. In many cases, these images and videos are created and shared with no ill intent, but the same cannot be said of creators looking to mislead or misinform the public, or to misuse AI for fraud or blackmail.
“All the media we have today, the majority, are in the digital format, and we rely on digital media to gain knowledge, information, on what is happening in the world,” Dr. Siwei Lyu, a professor at the State University of New York at Buffalo and the co-director of the school’s Center for Information Integrity, told Nexstar.
“If someone has the ability to manipulate or fabricate this media, and users cannot tell the difference … [it can] influence their decision-making, and by doing so, affect our lives and society and democracy,” Lyu said. “That’s a critical concern and threat to the well-being of everyone.”
Even when there is no malicious intent on the part of the creator, the generative-AI industry is operating, and evolving, with little oversight.
“The rise of these big tech companies has created this interesting environment where anything, even faulty or if it has glitches, is put out there, because there is not a regulatory agency,” Amarda Shehu, the associate dean for AI Innovation at George Mason University’s College of Engineering and Computing, said. “The incentive to make money is so strong, the attitude is, ‘We’ll do it now and deal with what comes later.’”
Shehu is not yet concerned about AI itself spiraling out of control; for now, the bigger issue is the bad actors at the controls.
“We don’t need to be afraid of AI doing stupid things,” Shehu said. “We have humans for that.”
That’s not to say all generative AI is bad. The technology is brimming with the potential for beneficial applications, especially in the engineering and medical fields, Shehu told Nexstar. But without widespread oversight protecting the public from bad actors using AI tools maliciously, it’s up to us consumers to vet what we’re seeing.
So, how can we tell what’s real?
Both Lyu and Shehu are well aware of the sophisticated tools to detect deepfakes and digital forgeries, some of which are powered by AI themselves. Lyu even likened one of the tools he’s helped develop to a pet, which he trains to recognize any signs of a fake.
The experts said, however, that deepfake image generators have yet to produce “perfect” results, meaning there are usually telltale signs of fakery visible — even to the average eye.
“You could find a very realistic AI image, everything looks real … and the person [in the picture] has hands with six fingers,” Lyu said. “Or, the hands are in a configuration that is very strange, and does not look like a pose someone would make with real hands.”
Lyu also identified a giveaway often present in the eyes of some human subjects in AI-generated images.
“In the real world, when I’m looking [at something], the reflections in my two eyes will be the same, but this will not be the case with AI-generated images,” Lyu added. “The reflections may look very different, almost like they are looking at two different things.”
Shehu said a close look at any human subject's skin may also hold the key, as some AI programs may generate mismatched skin tones or pore-free skin. These types of flaws (and others), she said, contribute to an image's "weirdness factor," or the degree to which it evokes an unsettling feeling.
The background of the image may have several giveaways too.
“Another thing that people have started to notice is lack of context,” Shehu said. “The background is not connected.”
In flawed images, she said, the computers might “mix and match” elements from several types of backgrounds, creating odd-looking text, inconsistent details, or an overall “lack of physics” not possible in the real world.
“No shadows, no reflections. If you see crowds of people in the background, some will be well defined, others will look like they are missing pixels,” she said.
“I call it ‘lack of coherence.’”
As for video, Lyu offers another trick for detecting deepfakes: Watch the lips.
"Notice the inconsistencies between the sound being pronounced and the movement and the shape of the mouth," he said. "[Listen for] what sounds like a 'b' or 'p' or 'm' being pronounced, as it requires lips to be tightly closed. But deepfakes may not have that perfect synchronization of sound."
Audio and text, Shehu said, are more difficult to pick apart, with less information available to the user. But Lyu said there’s one sound to listen for — or rather, the lack of a sound.
“One thing we notice is the lack of background sounds,” he said. “In real speech … there are pauses, and ‘uhs,’ and breathing along the way. In AI-generated voices, you do not hear all this separation. Voices sound super calm, you don’t hear the breathing sound at all.”
When it comes to text, Shehu said it may be possible for tools to detect AI-generated copy, but it’s not as reliable yet.
“I follow a lot of the AI detection tools, they have a very high false-positive rate,” she said, regarding AI and text. “They have a very hard time to detect that text is synthetic.”
At the same time, generative AI tools keep getting better, producing ever more realistic media. Some of these tools are being refined specifically to reduce or correct the flaws mentioned above.
“It’s a cat-and-mouse game,” Shehu said. “Whenever someone comes in and says they have a good AI detector, the next [deepfake generator] is made to evade that. Very much like in cybersecurity.”
Lyu said one of his jobs now is to make it harder and more expensive to produce deepfakes. But he and Shehu warn that the government should give more attention to the subject and support the education sector's efforts to mitigate any harm that may come of the new technology.
“I was never told that what we were working on could potentially harm people,” Lyu said.
“The first thing that needs to be done is to educate the next generation of science and engineering students to be more mindful of the technology they’re working on. … Even starting with very lofty intentions could have unintended effects.”
Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.