The views expressed by contributors are their own and not the view of The Hill

The age of deepfake: When seeing is no longer necessarily believing


Seeing is believing. Except when you can’t trust what you see anymore — something that happened to much of America this past weekend.

The current term of art for contrived stories is “fake news.” The idea is hardly new: as early as 1845, author Edgar Allan Poe wrote in his short story, “The System of Doctor Tarr and Professor Fether,” the well-known admonition, “Believe nothing you hear, and only one half that you see.” Soon, we may not be able to believe anything we see, either.

It’s one thing to deliberately manipulate perception through editing. The art of creating deceptive conclusions that way has only been amplified by the light-speed news cycle and the ubiquity of social media. Witness the latest controversy involving students from Covington (Ky.) Catholic High School, where allegations and retractions seemed to occur almost simultaneously. The discovery of additional video showing the entire encounter provided an entirely different context and kicked off apologetic backtracking in some circles. Yet by that point, it hardly seemed to matter.

While that kind of deception is bad, it’s another thing entirely to create the deception with such authenticity that it nearly precludes the possibility of evidence to the contrary.

Enter artificial intelligence (AI) and what are known as “deepfake” videos. The term “deepfake” is a blend of two words: deep learning and fake. Deep learning allows AI to learn things about you, such as your face and body, from readily available images and video. Using a machine-learning technique called a Generative Adversarial Network (GAN), two neural networks are pitted against each other: one generates forged frames while the other tries to spot them, until the forgeries become convincing enough to splice a source face seamlessly into a target video.
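For readers curious about the mechanics, below is a minimal sketch of the adversarial training loop at the heart of a GAN, written in PyTorch. It is illustrative only: the network sizes are toy values, the “real” images are random stand-ins, and actual deepfake tools add face detection, alignment and far larger models trained on thousands of images.

    # A minimal GAN training loop in PyTorch (illustrative toy example).
    import torch
    import torch.nn as nn

    latent_dim, image_dim = 64, 28 * 28   # toy sizes, not production values

    # Generator: turns random noise into a fake "image" vector.
    G = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, image_dim), nn.Tanh(),
    )

    # Discriminator: scores how "real" an image looks.
    D = nn.Sequential(
        nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1),
    )

    criterion = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

    for step in range(1000):
        real = torch.rand(32, image_dim) * 2 - 1   # stand-in for real face crops
        fake = G(torch.randn(32, latent_dim))

        # Train the discriminator to separate real from fake.
        opt_d.zero_grad()
        d_loss = (criterion(D(real), torch.ones(32, 1)) +
                  criterion(D(fake.detach()), torch.zeros(32, 1)))
        d_loss.backward()
        opt_d.step()

        # Train the generator to fool the discriminator.
        opt_g.zero_grad()
        g_loss = criterion(D(fake), torch.ones(32, 1))
        g_loss.backward()
        opt_g.step()

As the two networks compete, the generator’s output improves until the discriminator, and eventually the human eye, can no longer reliably tell real from fake.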

Deepfakes are a recent phenomenon, emerging in 2017 as a threat initially involving the porn industry; unfortunately, Hollywood actresses such as Scarlett Johansson were too easy a target for deepfake porn videos. By the very nature of her work, there are a tremendous number of images and hundreds of hours of video with which to work. The result is a product that soon will be indistinguishable from the real thing. But this is only the beginning.

The power of AI was once reserved for nation-states and well-funded research institutions. It required massive computing power and storage. No more. Like any technology, from mobile phones to televisions, the capability has become cheaper and easier to acquire.

This escalation has gotten the attention of our Department of Defense, specifically the Defense Advanced Research Projects Agency (DARPA). The agency has even held contests to create the best deepfake videos on the planet in order to test its new tools for ferreting out AI-generated photos, videos and even audio recordings.

In June of 2018, I wrote about the role of government robots and chatbots. Google’s demonstration of Duplex, “an AI system for accomplishing real-world tasks over the phone,” could be the customer-service killer app for which the government has been waiting. The all-too-real voice fooled everyone on the other end of the line.

Making computers sound and respond like humans is within our reach. Imagine making them look like humans as well.

Lawmakers are rightfully concerned about deepfakes; even they have a hard time distinguishing fact from fiction. Imagine escalating this to a new level of malign influence operations by the Russians. Not only would there be fake posts, but also authentic-looking forged videos that dissolve viewers’ initial skepticism toward a story.

The consequences could be deadly. In an already tense Middle East, how would the terrorist group Hamas respond to a video of Israeli Prime Minister Benjamin Netanyahu appearing to burn a Quran? Would it matter that it never happened? How could Netanyahu prove he didn’t do it?

The potential scenarios are too numerous to count. Chinese intelligence officers, for example, might show a potential espionage target a video of his or her family members apparently awaiting execution unless top-secret information is divulged. Convincingly simulating the death of key figures has always been a challenge; deepfake videos will move people past their initial skepticism and shape their responses.

During a time of war, knowing which message to believe and which to ignore will have profound consequences. A faked video of a military leader giving contradictory orders to troops in the field might give an adversary the slight edge needed to win an engagement.

The greatest impact of deepfake videos isn’t the fact that they are untrue. It is the effect the deception creates, which lingers long after the truth has been discovered.

This scenario and its potential consequences were contemplated as far back as 1710 by Jonathan Swift, author of “Gulliver’s Travels” and the foremost satirist of his time. Writing for The Examiner, he noted, “Few lies carry the inventor’s mark, and the most prostitute enemy to truth may spread a thousand, without being known for the author: besides, as the vilest writer hath his readers, so the greatest liar hath his believers: and it often happens, that if a lie be believed only for an hour, it hath done its work, and there is no farther occasion for it. Falsehood flies, and truth comes limping after it, so that when men come to be undeceived, it is too late; the jest is over, and the tale hath had its effect.”

So, are you going to believe me — or your own eyes?

Morgan Wright is an expert on cybersecurity strategy, cyberterrorism, identity theft and privacy. He previously worked as a senior adviser in the U.S. State Department Antiterrorism Assistance Program and as senior law enforcement adviser for the 2012 Republican National Convention. Follow him on Twitter @morganwright_us.


