Advances in artificial intelligence paint a worrying picture of the rise of deepfake audio and video, but can it be stopped?
Imagine you’re holding a video conference with a colleague or business partner in another city. You’re discussing sensitive matters, like the launch of a new product or the latest unpublished financial reports. Everything seems to be going well, and you know who you’re talking to. Maybe you’ve even met them before. Their appearance and voice are as you expected, and they seem to be pretty familiar with their jobs and your business.
It might sound like a routine business call, but what if the person you thought you were talking to is actually someone else? They might seem genuine, but behind the familiar imagery and audio is a social engineering scammer fully intent on duping you into surrendering sensitive corporate information. In a nutshell, this is the disturbing world of deepfakes, where artificial intelligence is the new weapon of choice in the scammer’s arsenal.
What exactly are deepfakes?
Among the newest words on the technology block, ‘deepfake’ is a portmanteau of ‘deep learning’ and ‘fake.’ The term emerged in late 2017, when it first appeared as the name of a Reddit community. The technology uses artificial intelligence to superimpose and combine both real and AI-generated images, videos and audio to make them look almost indistinguishable from the real thing. The apparent authenticity of the results is rapidly reaching disturbing levels.
One of the most famous deepfakes of all was created by actor and comedian Jordan Peele, who made a video of Barack Obama delivering a PSA about fake news. While that video was made for the sake of humor and to raise awareness of this rapidly emerging trend, deepfake technology has, unsurprisingly, been misappropriated since the very beginning. Its implications for credibility and authenticity have placed it squarely in the spotlight.
The worrying consequences of deepfakes
Wherever there’s technological innovation, pornography is rarely far behind, so it’s little surprise that the first deepfakes to make waves on Reddit were videos that had been manipulated to replace the original actresses’ faces with someone else’s – typically a well-known celebrity’s. Reddit, along with many other platforms, has since banned the practice. However, as actress Scarlett Johansson said of deepfake pornography, while celebrities are largely protected by their fame, the trend poses a grave threat to people of lesser prominence. In other words, those who don’t take steps to protect their identities could end up facing a reputational meltdown.
That brings me to the political consequences of deepfakes. So far, attempts to masquerade as well-known politicians have been carried out largely in the name of research or comedy. But the time is coming when deepfakes could become realistic enough to cause widespread social unrest. No longer will we be able to rely on our eyes and ears for a firsthand account of events. Imagine, for example, seeing a realistic video of a world leader discussing plans to carry out assassinations in rival states. In a world primed for violence, the implications of deepfake technology could have devastating consequences.
Purveyors of fake news seeking to make a political impact are just one side of the story. The other is a form of social engineering that business leaders are all too familiar with. As the video conference example illustrates, deepfakes give cybercriminals a new weapon, and the threat is not nearly as distant as you may think: the first publicly reported deepfake-based attack against a corporation came to light in August 2019, when a UK energy firm was duped by a scammer masquerading as the boss of its German parent company. The scammer allegedly used AI to mimic the voice, accent and speech patterns of the parent company’s CEO, someone the victim knew, over a phone call. Suspecting nothing, the victim was duped out of US$243,000.
These uses of deepfake technology might seem far-fetched, but it’s important to remember that social engineering scammers were impersonating people long before the rise of digital technologies. Criminals no longer have to study their targets in great depth or hire makeup artists to disguise themselves; they now have emerging technologies on their side, just as businesses do for legitimate purposes. Successfully impersonating a VIP used to be far more difficult. Now, the ability to create a deepfake puppet of a real person from publicly available photos, video and audio recordings is within almost anyone’s grasp.
Can you protect your business from deepfakes?
The biggest danger of all is the common misconception that synthetic impersonation can never be as convincing as the real thing. We live in a world where it’s getting harder to tell fact from fiction. From the hundreds of millions of fake social media profiles to the worrying spread of fake news and the steady rise of phishing attacks – it’s never been more important to think twice about what you see.
Perhaps, after all, there is a case for a return of face-to-face meetings behind closed doors when discussing important business matters. Fortunately, there are other ways you can prepare your business for the inevitable rise of deepfakes without placing huge barriers in the way of innovation.
To start with, ‘seeing is believing’ is a maxim you’ll increasingly want to distrust when viewing video or listening to audio, including live broadcasts. To untrained eyes, deepfakes are getting harder to tell apart from the real thing, but there are, and likely always will be, telltale signs rooted in the fundamental way AI algorithms work. When a deepfake algorithm generates a new face, it must rotate, resize and otherwise warp that face to fit the target footage. It’s a process that inevitably leaves behind some graphical artifacts.
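To see why warping always leaves a trace, consider a toy sketch (not any real detector): face-swap pipelines resample pixels when they scale and rotate a generated face, and resampling is lossy. Here a 2× downscale-and-upscale round trip – a hypothetical stand-in for one step of such a pipeline – never reproduces the original patch exactly, and that residual is one kind of artifact detection tools can measure.

```python
import numpy as np

# A hedged illustration, not a production detector: simulate one lossy
# resampling step of the kind a face-swap pipeline performs.
rng = np.random.default_rng(0)
patch = rng.random((64, 64))  # stand-in for a face region

# 2x downscale by averaging each 2x2 block of pixels...
down = patch.reshape(32, 2, 32, 2).mean(axis=(1, 3))
# ...then 2x upscale by pixel repetition, as a naive re-blend would.
up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)

# The round trip never returns to the original: the leftover residual
# is the kind of graphical artifact detectors hunt for.
residual = np.abs(patch - up).mean()
print(f"mean per-pixel residual: {residual:.4f}")  # nonzero
```

Real detectors apply far more sophisticated statistics, but the principle is the same: transformation leaves measurable fingerprints.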
While these artifacts will become harder to identify by sight alone, AI itself can also be used as a force for good – it can detect whether a video or stream is authentic or not. The science of defending against deepfakes is a battle of wills: as deepfakes increase in believability, cybersecurity professionals need to invest more in seeking the truth.
A team of researchers in China recently published a method that uses AI itself to expose deepfakes in real time. In another paper, the same team described a way to proactively protect digital photos and videos from being misappropriated by deepfake algorithms: adding digital noise that is invisible to the human eye. As the threat of deepfakes edges ever nearer, we can hopefully expect more countermeasures to follow suit.
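The imperceptibility half of that defense can be sketched in a few lines. This is only an illustration of the constraint, not the researchers’ actual method (which crafts model-specific adversarial noise): the perturbation is capped at 2/255 per pixel – below what a viewer notices – yet nearly every value a deepfake pipeline would ingest has been altered.

```python
import numpy as np

# Hedged sketch of the "invisible noise" idea. The 2/255 budget and the
# 8x8 "photo" are illustrative assumptions, not values from the paper.
rng = np.random.default_rng(1)
photo = rng.integers(0, 256, size=(8, 8, 3)).astype(np.float64) / 255.0

eps = 2 / 255  # per-pixel perturbation budget (imperceptible)
noise = rng.uniform(-eps, eps, size=photo.shape)
protected = np.clip(photo + noise, 0.0, 1.0)

# Invisible: no pixel moved by more than the budget...
print(np.abs(protected - photo).max() <= eps)  # True
# ...yet almost every input value a face-extraction model sees differs.
print((protected != photo).mean())
```

A real protective perturbation is optimized so those tiny changes maximally confuse the face-extraction model, but the trade-off it balances – invisible to people, disruptive to algorithms – is exactly the one shown here.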
This article represents the personal opinion of the author.