{"id":28954,"date":"2023-07-16T04:14:40","date_gmt":"2023-07-16T08:14:40","guid":{"rendered":"https:\/\/www.kaspersky.com\/blog\/?post_type=emagazine&#038;p=28954"},"modified":"2023-10-20T05:19:47","modified_gmt":"2023-10-20T09:19:47","slug":"deepfakes-2019","status":"publish","type":"emagazine","link":"https:\/\/www.kaspersky.com\/blog\/secure-futures-magazine\/deepfakes-2019\/28954\/","title":{"rendered":"What does the rise of deepfakes mean for the future of cybersecurity?"},"content":{"rendered":"<p>Imagine you\u2019re holding a video conference with a colleague or business partner in another city. You\u2019re discussing sensitive matters, like the launch of a new product or the latest unpublished financial reports. Everything seems to be going well, and you know who you\u2019re talking to. Maybe you\u2019ve even met them before. Their appearance and voice are as you expected, and they seem to be pretty familiar with their job and your business.<\/p>\n<p>It might sound like a routine business call, but what if the person you thought you were talking to is actually someone else? They might seem genuine, but behind the familiar imagery and audio is a social engineering scammer fully intent on duping you into surrendering sensitive corporate information. In a nutshell, this is the disturbing world of deepfakes, where artificial intelligence is the new weapon of choice in the scammer\u2019s arsenal.<\/p>\n<h2>What exactly are deepfakes?<\/h2>\n<p>Among the newest words on the technology block, \u2018deepfake\u2019 is a portmanteau of \u2018deep learning\u2019 and \u2018fake.\u2019 The term first appeared around two years ago on a Reddit community of the same name. The technology uses artificial intelligence to superimpose and combine both real and AI-generated images, videos and audio to make them look almost indistinguishable from the real thing. 
The apparent authenticity of the results is rapidly reaching disturbing levels.<\/p>\n<p><span class=\"embed-youtube\" style=\"text-align:center; display: block;\"><iframe class=\"youtube-player\" type=\"text\/html\" width=\"640\" height=\"390\" src=\"https:\/\/www.youtube.com\/embed\/cQ54GDm1eL0?version=3&amp;rel=1&amp;fs=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent\" frameborder=\"0\" allowfullscreen=\"true\"><\/iframe><\/span><\/p>\n<p>One of the most famous deepfakes of all was created by actor and comedian Jordan Peele, who made this video of <a href=\"https:\/\/www.theverge.com\/tldr\/2018\/4\/17\/17247334\/ai-fake-news-video-barack-obama-jordan-peele-buzzfeed\" target=\"_blank\" rel=\"noopener nofollow\">Obama delivering a PSA about fake news<\/a>. While this one was made for the sake of humor and to raise awareness of this rapidly emerging trend, deepfake technology has, unsurprisingly, been misappropriated since the very beginning. Its implications for credibility and authenticity have placed it squarely in the spotlight.<\/p>\n<h2>The worrying consequences of deepfakes<\/h2>\n<p>Wherever there\u2019s technology innovation, there\u2019s nearly always pornography, so it\u2019s little surprise that the first <a href=\"https:\/\/www.bbc.com\/news\/technology-42912529\" target=\"_blank\" rel=\"noopener nofollow\">deepfakes to make waves on Reddit<\/a> were videos that had been manipulated to replace the original actresses\u2019 faces with somebody else\u2019s \u2013 typically a well-known celebrity. Reddit, along with many other networks, has since banned the practice. 
However, <a href=\"https:\/\/www.washingtonpost.com\/technology\/2018\/12\/31\/scarlett-johansson-fake-ai-generated-sex-videos-nothing-can-stop-someone-cutting-pasting-my-image\/\" target=\"_blank\" rel=\"noopener nofollow\">as actress Scarlett Johansson said of deepfake pornography<\/a>, while celebrities are largely protected by their fame, the trend poses a grave threat to people of lesser prominence. In other words, those who don\u2019t take steps to protect their identities could potentially end up facing a reputational meltdown.<\/p>\n<p>That brings me to the political consequences of deepfakes. So far, attempts to masquerade as well-known politicians have been carried out largely in the name of research or comedy. But the time is coming when deepfakes could become realistic enough to cause widespread social unrest. No longer will we be able to rely on our eyes and ears for a firsthand account of events. Imagine, for example, seeing a realistic video of a world leader discussing plans to carry out assassinations in rival states. In a world primed for violence, the implications of deepfake technology could have devastating consequences.<\/p>\n<p>Purveyors of fake news seeking to make a political impact are just one side of the story. The other is the form of social engineering that business leaders are all too familiar with. As the video conference example illustrates, deepfakes are a new weapon for cybercriminals. The world\u2019s first deepfake-based attack against a corporation was <a href=\"https:\/\/www.wsj.com\/articles\/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402\" target=\"_blank\" rel=\"noopener nofollow\">reported in August 2019<\/a>, when a UK energy firm was duped by a person masquerading as the boss of its German parent company. The scammer allegedly used AI to impersonate the accent and voice patterns of the latter\u2019s CEO, someone the victim was familiar with, over a phone call. 
The victim suspected nothing and was duped out of $243,000.<\/p>\n<p>These uses of deepfake technology might seem far-fetched, but it\u2019s important to remember that social engineering scammers have been impersonating people since long before the rise of digital technologies. Criminals no longer have to go to such lengths as studying targets in great depth and even <a href=\"https:\/\/www.theguardian.com\/world\/2019\/mar\/28\/conmen-made-8m-by-impersonating-french-minister-israeli-police\" target=\"_blank\" rel=\"noopener nofollow\">hiring makeup artists to disguise themselves<\/a>; they now have emerging technologies on their side, just as businesses do for legitimate purposes. Previously, successfully impersonating a VIP was much more difficult. Now, the ability to create deepfake puppets of real people using publicly available photos, video and audio recordings is within everyone\u2019s grasp.<\/p>\n<h2>Can you protect your business from deepfakes?<\/h2>\n<p>The common misconception that synthetic impersonation can never be as convincing as the real thing is the biggest danger of all. We live in a world where it\u2019s getting harder to tell fact from fiction. From the hundreds of millions of fake social media profiles to the worrying spread of fake news and the constant rise of phishing attacks \u2013 it\u2019s never been more important to think twice about what you see.<\/p>\n<p>Perhaps, after all, there is a case for a return to face-to-face meetings behind closed doors when discussing important business matters. Fortunately, there are other ways you can prepare your business for the inevitable rise of deepfakes without placing huge barriers in the way of innovation.<\/p>\n<p>To start with, \u2018seeing is believing\u2019 is a concept you\u2019ll increasingly want to avoid when it comes to viewing video or listening to audio, including live broadcasts. 
To untrained eyes, deepfakes are getting harder to tell apart from the real thing, but there are, and likely always will be, some signs due to the fundamental way AI algorithms work. When a deepfake algorithm generates new faces, they are geometrically transformed with rotation, resizing and other distortions. It\u2019s a process that inevitably leaves behind some graphical artifacts.<\/p>\n<p>While these artifacts will become harder to identify by sight alone, AI itself can also be used as a force for good \u2013 it can detect whether a video or stream is authentic or not. The science of defending against deepfakes is a battle of wills: as deepfakes increase in believability, cybersecurity professionals need to invest more in seeking the truth.<\/p>\n<p>A team of researchers in China recently <a href=\"https:\/\/arxiv.org\/abs\/1811.00661\" target=\"_blank\" rel=\"noopener nofollow\">published a method<\/a> for using AI itself to expose deepfakes in real time. In <a href=\"https:\/\/arxiv.org\/abs\/1906.09288\" target=\"_blank\" rel=\"noopener nofollow\">another paper<\/a>, the same team described a way to proactively protect digital photos and videos from being misappropriated by deepfake algorithms by adding digital noise, invisible to the human eye. As the threat of deepfakes edges ever nearer, we can hopefully expect more countermeasures to follow suit.<\/p>\n<p><em>This article represents the personal opinion of the author.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Advances in artificial intelligence mean deepfake audio and video are becoming more convincing. Read this to keep one step ahead. 
<\/p>\n","protected":false},"author":2703,"featured_media":49368,"template":"","coauthors":[4311],"class_list":{"0":"post-28954","1":"emagazine","2":"type-emagazine","3":"status-publish","4":"has-post-thumbnail","6":"emagazine-category-emerging-tech","7":"emagazine-category-future-tech","8":"emagazine-category-small-business","9":"emagazine-category-trends","10":"emagazine-tag-deepfakes","11":"emagazine-tag-social-engineering"},"hreflang":[{"hreflang":"x-default","url":"https:\/\/www.kaspersky.com\/blog\/secure-futures-magazine\/deepfakes-2019\/28954\/"},{"hreflang":"en-us","url":"https:\/\/usa.kaspersky.com\/blog\/secure-futures-magazine\/deepfakes-2019\/21932\/"}],"acf":[],"_links":{"self":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/emagazine\/28954","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/emagazine"}],"about":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/types\/emagazine"}],"author":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/users\/2703"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/media\/49368"}],"wp:attachment":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/media?parent=28954"}],"wp:term":[{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/coauthors?post=28954"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}