Are deep fakes the next major cybersecurity threat?


Artificial intelligence has been around for quite some time now, and it remains one of the greatest evolutions in technology. Data science has made some exciting strides in the field in the past few years.

While most of this research is dedicated to making the world a little better through technology, it has also raised ethical dilemmas that trouble scientists and the general public alike.

AI can be defined as a program that mirrors the human brain, thinking and acting in the face of problems just as humans would. However, what we have now is weak AI. Weak AI refers to the various applications of artificial intelligence in computers or mobile phones, covering computational tasks like analyzing data, making predictions based on that data, and recognizing images.

It’s important to note that most AI today needs human supervision. Strong AI, or superintelligence, would be an AI that does not require any care or assistance from human beings to function. People who worry that AI will take the world by storm and upend the existing balance of power are, in effect, afraid of strong AI.

Though not yet a foreseeable reality, strong AI is closely associated with a branch of data science known as deep learning. With deep learning, data scientists hope to create an AI that mimics the human brain, and perhaps one day develops something like a human sense of identity. Deep learning approaches artificial intelligence by building artificial neural networks loosely modeled on those in the human brain.
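To make the idea of a neural network concrete, here is a toy sketch of one: layers of “neurons,” each computing a weighted sum of its inputs followed by a non-linear activation. This is purely illustrative (the weights here are random, and all names and dimensions are made up for the example); a real deep learning system stacks many such layers and learns its weights from data.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linear activation: pass positive values, zero out negatives.
    return np.maximum(0.0, x)

def forward(x, w1, b1, w2, b2):
    hidden = relu(x @ w1 + b1)   # first layer: 4 inputs -> 8 hidden units
    return hidden @ w2 + b2      # second layer: 8 hidden units -> 1 output

# Randomly initialised weights; training would adjust these via
# gradient descent to reduce prediction error on real data.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

batch = rng.normal(size=(3, 4))  # 3 example inputs, 4 features each
print(forward(batch, w1, b1, w2, b2).shape)  # prints (3, 1): one output per example
```

Deep fake generators are, at heart, much larger versions of this same building block, trained on thousands of images of a target face.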

Weak AI is an amalgamation of programming and application: systems programmed to perform specific tasks, or to solve the same problems presented to human beings, but in a much faster and more efficient way. Strong AI, though also programmed to think like a human being, is supposed to develop a consciousness over time. Several pioneers of data science fear that this could lead to a significant shift of power.

For the general public, the fear of AI and what it might represent for the future of humanity comes from how it’s represented in the media. In science-fiction films, groundbreaking artificial intelligence is often portrayed as the inevitable doom of humanity, as with Ultron in Avengers: Age of Ultron, Minority Report, The Terminator, and Blade Runner, to name a few.

While these movies get the general premise of AI broadly right, the sense of “doom” they try to portray is highly misguided. Max Tegmark, President of the Future of Life Institute, sheds some light on these myths: “The fear of machines turning evil is another red herring. The real worry isn’t malevolence but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours.”

He gives an example of how human beings feel powerful today because we are the most intelligent of all life forms on Earth. He says, “Humans don’t generally hate ants, but we’re more intelligent than they are – so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial AI movement wants to avoid placing humanity in the position of those ants.”

Max debunks another myth about artificial intelligence: the timeline. Most articles and movies about artificial intelligence portray superintelligence as something that will soon lead to a robot uprising. Nothing could be further from the truth. The fact is that nobody knows how long it could take for an AI to become advanced enough to threaten humanity as we know it.

John McCarthy, the computer scientist who coined the term “artificial intelligence,” made what now seems like a wildly optimistic prediction about AI’s future in 1955.

McCarthy, along with his colleagues Marvin Minsky, Nathaniel Rochester, and Claude Shannon, wrote, “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College […] An attempt will be made to find how to make machines use language, form abstractions, and concepts, solve kinds of problems now reserved for humans and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”

History is full of such prophecies about the advancement of technology. Some proved accurate, but most reflected nothing more than a misguided sense of where science was taking us, and several were just ramblings.

Another example is Ernest Rutherford, one of the most significant nuclear physicists of his time, who declared in 1933 that nuclear energy was “moonshine.” In less than 24 hours, he was proven wrong, when Leo Szilard conceived of the nuclear chain reaction.

The Future of Life Institute is an organization that supports the advancement of safe AI. Tegmark addresses yet another myth: “The most extreme form of this myth is that superhuman AI will never arrive because it’s physically impossible. However, physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer and that there’s no law of physics preventing us from building even more intelligent quark blobs.”

As you can see, both extremes exist in the minds of people across the globe. However, let’s not get carried away by assumptions and instead look at the hard evidence.

Given the movement to make artificial intelligence beneficial for human beings, Max suggests that it could help tackle major problems like war and genocide and bring us closer to world peace. He also expresses concern over organizations that might want to use AI to build lethal autonomous weapons.

Lethal autonomous weapons are weapons that would operate without any human supervision whatsoever. Several open letters and pledges against their use have called upon governments to hold a serious discussion on the subject.

One of the issues published by the Future of Life Institute explains where the problem lies with lethal autonomous weapons: “Lethal AWS may create a paradigm shift in how we wage war. This revolution will be one of software; with advances in technologies such as facial recognition and computer vision, autonomous navigation in congested environments, cooperative autonomy, or swarming, these systems can be used in a variety of assets, from tanks and ships to small commercial drones.”

All of this is to acknowledge that artificial intelligence could be the harbinger of chaos one day, but that day is not anytime soon, and it is hardly inevitable. Let’s look instead at some of the real threats the community faces today, keeping in mind that even these may be partly perceived rather than rational fears. One of the most “threatening” artificial intelligence trends of the past few years is deep fakes.

Deep fakes are one of the many advancements that deep learning has made possible: the creation of synthetic audio and video using cutting-edge AI. Many people believe they pose a grave cybersecurity threat.

In May of 2016, Jimmy Fallon and his colleague Dion Flynn appeared on the Tonight Show as Donald Trump and Barack Obama. Fallon wore an orange wig and bronzer to portray Trump, while Flynn played a composed Obama. Both men sat side by side, facing the camera.

In this little skit, Fallon’s Trump phoned Flynn’s Obama to brag about the Republican candidate’s win in Indiana. In March of 2019, the footage resurfaced thanks to a well-known YouTube channel called DerpFakes.

The resurfaced footage featured a near-perfect recreation of Donald Trump’s actual face mapped onto Fallon’s performance, mimicking his actions and mannerisms, which staggered viewers worldwide. The video was titled “The Presidents” because it featured not only a near-perfect recreation of Trump but an appearance by Obama as well.

The video gained millions of views over the next few days. As reported by The Guardian, its creator said it was made “purely for laughs.” In the same article, writer Simon Parkin calls this evolution of artificial intelligence a “threat to democracy.”

Deep fakes are a computerized recreation of someone’s appearance. The technology creates fake personas by using an algorithm that maps one person’s face onto another’s, making the target appear to perform the actions in the video. In 2018, an article by the Guardian recorded people’s concerns regarding deep fake technology.
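A common face-swap technique behind deep fakes trains one shared encoder with a separate decoder per person; the swap happens by encoding person A’s face and decoding it with person B’s decoder, so the output keeps A’s pose and expression but B’s appearance. Here is a minimal structural sketch of that idea, with random linear maps standing in for the trained convolutional networks a real system would use; all names and dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
FACE_DIM, LATENT_DIM = 64, 16  # toy sizes; real systems use full images

encoder = rng.normal(size=(FACE_DIM, LATENT_DIM))    # shared by both people
decoder_a = rng.normal(size=(LATENT_DIM, FACE_DIM))  # would be trained on person A's faces
decoder_b = rng.normal(size=(LATENT_DIM, FACE_DIM))  # would be trained on person B's faces

def swap_face(face_of_a):
    """Encode person A's face, then decode with B's decoder:
    the result carries A's pose/expression rendered as person B."""
    latent = face_of_a @ encoder   # compress to shared pose/expression features
    return latent @ decoder_b      # reconstruct those features as person B

frame_of_a = rng.normal(size=FACE_DIM)  # one (synthetic) flattened face frame
fake_frame = swap_face(frame_of_a)
print(fake_frame.shape)                 # prints (64,): same shape as the input face
```

Run frame by frame over a video, this is what makes a target appear to say and do things they never did.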

That article discussed a video posted by the Belgian political party Socialistische Partij Anders, in which Donald Trump appeared to advise the Belgian people about the ongoing challenge of climate change. In the video, Trump said, “As you know, I had the balls to withdraw from the Paris climate agreement, and so should you.”

The video was fake, but many people on the internet could not tell. It led to a host of angry posts on Facebook from people enraged by Trump’s apparent meddling in Belgium’s climate policy. In the past few years, instances like this have made people question the implications of deep fake technology.

Some people believe that such an advancement opens the door to a myriad of cybercrimes and increases the spread of fake news. Politicians face one of the biggest threats: they fear that deep fakes could be used against them, even to the point of rigging elections.

Their fear is wholly justified. This is one of the most frightening implications of AI, because it is becoming increasingly difficult to tell which videos are real and which are deep fakes.

Another example of the stir manipulated video can cause is a doctored 2019 clip of Nancy Pelosi, the Speaker of the US House of Representatives, posted by a Trump supporter. The clip was slowed down to make Pelosi appear to slur drunkenly at a news conference. Donald Trump, president at the time, shared it with the caption, “Pelosi stammers through a news conference.”

The video was withdrawn from social media, but not before it had gained millions of views on Facebook and YouTube. Donald Trump, however, did not take his tweet down.

Since then, several deep fakes have gone viral because of the sheer humor attached to them, but people miss the implications of such a technology. If exploited, deep fakes could lead to a severe breach of trust in government organizations, media houses, and more. As mentioned earlier, this poses a threat to democracy and cybersecurity as we know them.

One of the most recent instances of a deep fake was aired by ESPN. In April 2020, with sporting leagues shut down by the coronavirus pandemic, ESPN broadcast its documentary series The Last Dance to make up for the loss of live sports content. The documentary covered the Chicago Bulls’ 1997-98 championship season.

The documentary began with a clip of SportsCenter footage from 1998, when the Chicago Bulls had won their third consecutive championship. At first, viewers could not understand what was happening. Then Kenny Mayne, an ESPN analyst, was shown making shockingly accurate predictions about the year 2020, making it apparent that this was not the actual clip from 1998 but a computerized deep fake.

This led to a massive buzz on social media. While State Farm and Kenny Mayne pulled off a hilarious prank on viewers, can we forget the possible harmful implications of deep fakes? According to CNN, the number of deep fake videos online grew by 84% between December 2018 and October 2019.

Apart from politics and media, this has scary connotations for businesses worldwide as well. Symantec, one of the most prominent cybersecurity organizations in the world, reported that three CEOs suffered substantial losses due to deep fake audio scams.

According to The Verge, the chief executive of an energy company in the UK sent $220,000 to a Hungarian supplier after a caller convincingly impersonated his boss’s voice. A 2020 Forrester prediction discusses how vulnerable the business community is to deep fake audio and video scams.

As far as businesses go, some of the most significant threats posed by deep fake technology are as follows:

  • Deep fakes of clients and sponsors asking for money.
  • Deep fakes of supervisors and business owners requesting payment transfers or access to sensitive intelligence that could then be exploited.
  • Deep fakes mimicking IT administrators to gain access to company accounts and banking information.
  • Deep fake audio and video used to blackmail targets and extort ransom money.
  • Deep fakes fueling smear campaigns on social media.

These are just some of the consequences of deep fake technology. It could inflict losses on anyone, from a small business owner to a multinational company with the best security services around.

We’ve talked about the exaggerated threats attributed to artificial intelligence’s general advancement and how most of them are myths. But the deep fake threats staring billions of people in the face are very real. That is scary on its own, and if deep fake scams keep extracting money from businesses, the consequences could be staggering.

As we can see, businesses, the media, politicians, and social media are pillars of society that need to be protected from deep fake scams. These threats are real, especially compared with the bogus claims made by popular articles on artificial intelligence.

Doomsday may not be waiting far off in the future; given the right resources, it could be sitting in your building, creating deep fake scams to rob people of millions of dollars. It is therefore important to stop obsessing over perceived threats posed by artificial intelligence and focus instead on what people are actually doing with deep fakes.

These concerns call for serious action, and they will not be resolved by social media executives banning or taking down a false claim. They require concrete cybersecurity practices so that the general public is not exploited.

We can always sit back and imagine artificial intelligence leading to a war between humans and robots, based on the misinformation spread by science-fiction films. But the real threat at the moment lies in the harmful ways humans are using artificial intelligence. That threat is far more real than the imagined one living in our heads.


Michael Inglis

CISSP, CEH, BSc, MCSE, AWS SAA - Cyber Security Specialist with over 20 years of experience in IT and Cyber Security. Providing global cybersecurity news, analysis, and research.
