
Deepfake Tech – How to Deal with This Growing Threat

Who would’ve thought that the face swap apps we’ve all been enjoying on social media could become tools that ruin a person’s entire life – or even a whole nation’s?

Not everything we see is real – that warning has never been more relevant, now that deepfakes pose terrifying threats to everyone with data published online. The simple face swap apps we’ve been playing with over the past years have developed into faking technology so polished that it produces thoroughly believable results.

In fact, a newly launched Chinese face swap app called Zao proved how tremendously deepfake tech has improved. Its outputs are so convincing that the human eye is easily deceived by the photos and videos it creates.

Despite its impressive image manipulation, Zao received massive backlash as soon as it reached global popularity, after it was found to have violated certain privacy rules that placed its users in compromising situations.

However, the threats presented by deepfakes go beyond mere privacy scares. A bigger concern is the high probability that the technology becomes a powerful weapon for influencing public opinion; politics and democracy are singled out as the biggest targets of malicious deepfake users. Cybersecurity experts also warn that, as the technology nears perfection, deepfakes could become an almost unbeatable source of cybercrime.

Early Manifestations of Deepfake Technology

Deepfake tech, or AI-synthesized media, was initially used in academic research as part of Computer Vision studies. One of the first projects of this kind dates to 1997, when video footage of a speaking person was modified to make him appear to be mouthing the contents of a separate audio track.

Fast forward to 2017: a Redditor coined the term “deepfake” and posted videos of several Hollywood stars, including Scarlett Johansson, presenting them in compromising sexual scenes. The videos turned out to be manipulated, with the face of the actress pasted onto the body of a porn star.

Another video demonstrating how rapidly deepfake technology is maturing was photorealistic footage of what appeared to be former President Barack Obama delivering a speech. It was part of the “Synthesizing Obama” project at the University of Washington, which took lip syncing to a whole new level.

One of the project’s lead researchers admitted that the technology can be used in negative ways and urged people to use what they developed responsibly. She also mentioned methods of reverse engineering the technology to tell an edited video from a real one.

But are there really ways to reverse engineer a deepfake? How can an ordinary person, with no access to AI tools, know what’s real and what’s not?

The Scarier Threat Created by Deepfake

We already know that deepfakes can make a fake video look real. They can damage a person’s reputation when scandalous fake photos and videos are released online. They can even be used to alter evidence and pin a crime on an innocent person.

However, a more tangible threat is how deepfakes make the real seem fake. An example is the military coup in Gabon, Central Africa. The country’s people reportedly had not seen their president, Ali Bongo, for months when, in a surprising turn, he delivered his customary New Year’s speech.

Political rivals insinuated that the footage was a deepfake and that the government was masking the president’s ailment. A week later, a military coup was launched that led to killings. This is just one unfortunate incident demonstrating the more serious risks posed by deepfakes.

Related: The Future of AI-Based Security Solutions in the Cybersecurity Industry

Deepfake on Cybersecurity

Digital security firm Symantec has previously pointed out three incidents of deepfake voice fraud, all of which happened earlier in 2019. In the post, it was cited how Euler Hermes lost at least $10m to a thief whose voice was mistaken for that of the boss of one of its suppliers.

AI, while a useful tool for detecting incoming cyberattacks and minimizing cyberthreats, can apparently also be a technology that pushes a company to the brink. Cybersecurity firms believe cybercriminals will soon use this innovation to up their game.

To prevent that, every enterprise and every individual – both of whom will grow more vulnerable to deepfake fraud – should plan how to detect and prevent such attacks.

Dealing with Deepfake Technology

Some companies are still in denial about the risks deepfake technology poses to their business. They believe the tech is still immature and that the scare is merely hype.

Given that there are already confirmed victims of deepfake fraud, cybersecurity companies are more convinced than ever that a prompt response to the rise of increasingly sophisticated deepfake methods is needed.

There are several recommendations shared by academics and cybersecurity experts. Some of them include the following:

1. Use of AI for Deepfake Detection

Deepfake media are created using artificial intelligence, so experts suggest that reverse engineering them should involve AI as well.

A few companies are now developing AI focused on analyzing media and detecting discrepancies in photos, videos, and audio files. Tell-tale signs of fake media include inconsistent pixels around a subject’s mouth and inconsistencies in the rendering of shadows or the angles of a person’s face.

However, such artifacts alone aren’t conclusive evidence of a deepfake.

While developers can come up with accurate detection methods, they are still waiting for a bigger database of examples on which to build a detection model.
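To make the idea concrete, here is a deliberately toy sketch (not any vendor’s actual detector) of the pixel-inconsistency heuristic mentioned above: blended regions of a forged frame, such as the area around the mouth, are often smoother than the surrounding face, so comparing local sharpness between regions can flag a suspicious frame. The function names, the box coordinates, and the threshold are all illustrative assumptions.

```python
import random

def region_sharpness(pixels, top, left, size):
    """Mean absolute difference between horizontally neighboring pixels in a
    square region of a grayscale image (list of rows of 0-255 values).
    Noisy, detailed texture scores high; blended/smoothed areas score low."""
    total, count = 0, 0
    for y in range(top, top + size):
        for x in range(left, left + size - 1):
            total += abs(pixels[y][x] - pixels[y][x + 1])
            count += 1
    return total / count

def looks_blended(pixels, face_box, mouth_box, ratio_threshold=0.5):
    """Flag a frame when the mouth region is markedly smoother than the face."""
    face = region_sharpness(pixels, *face_box)
    mouth = region_sharpness(pixels, *mouth_box)
    return mouth < face * ratio_threshold

# Synthetic 8x8 "frame": noisy face texture with an artificially smoothed
# patch, mimicking the blending a face swap leaves behind.
random.seed(0)
frame = [[random.randint(0, 255) for _ in range(8)] for _ in range(8)]
for y in range(4, 8):
    for x in range(4, 8):
        frame[y][x] = 128  # flat patch, as blending would produce

print(looks_blended(frame, face_box=(0, 0, 4), mouth_box=(4, 4, 4)))  # True
```

Real detectors are trained neural networks operating on far richer features, which is exactly why the large datasets discussed next matter.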

This leads us to Google’s latest contribution to managing deepfake threats.

2. Google’s Collection of Deepfakes

There’s a reason Google is deemed the king of search engines: it has access to almost all media published online, and that includes deepfake files.

While not a direct solution for stopping deepfake attacks, Google’s gallery of deepfake media is a big help to developers looking for a basis for their AI detection systems. The tech giant released over 3,000 synthesized videos to help accelerate the creation of deepfake detection tools.

Facebook has also announced that it will compile a similar database. When face swap apps were first released, many users posted the results to social media, and Facebook saw the largest volume of those deepfakes. Besides, through its camera filters, Facebook is itself one of the originators of face manipulation.

3. 3D Printed Glasses and Watermarks

Symantec revealed that it is exploring specialized 3D-printed glasses to detect deepfakes. The approach is somewhat similar to the anti-spoofing technology in mobile phones that keeps the facial recognition used for lock and unlock from being tricked.

Meanwhile, start-ups ProofMode and Truepic stamp photos with a watermark to indicate authenticity. Major chipmaker Qualcomm has already developed software to implement the method in mobile phones.
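The core idea behind such authenticity stamps can be sketched in a few lines. This is a minimal illustration – the key name and workflow are hypothetical, not ProofMode’s or Truepic’s actual APIs: a device signs a photo’s bytes at capture time with a secret key, and any later manipulation of the image breaks verification.

```python
import hashlib
import hmac

# Hypothetical secret held by the capture device; real systems would protect
# this in secure hardware rather than in code.
DEVICE_KEY = b"secret-key-held-in-camera-hardware"

def stamp(image_bytes):
    """Return an authenticity tag computed over the raw image at capture time."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes, tag):
    """Check that the image still matches its capture-time tag."""
    return hmac.compare_digest(stamp(image_bytes), tag)

original = b"\x89PNG...raw pixel data..."  # stand-in for a real photo file
tag = stamp(original)

print(verify(original, tag))               # True: untouched photo
print(verify(original + b"edit", tag))     # False: any edit breaks the tag
```

Production schemes use public-key signatures so that anyone can verify a photo without knowing the device’s secret, but the principle – bind the pixels to a tamper-evident tag at capture – is the same.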

4. Verification Codes

It isn’t clear yet exactly how codes would work in detecting deepfakes, but Accenture believes responsible use of AI is one way to prevent probable threats. They suggest that people publishing code for creating deepfakes include verification measures. Obviously, though, cybercriminals will never comply.

Related: Intel and Facebook Working on AI Inference Chip Called Nervana


Deepfake is a relatively new technology that has opened new windows of opportunity for cybercriminals to win over their prey. Developers were quick to create tools that produce media capable of sowing confusion, but they weren’t prepared for the consequences.

Are we going to see a day when we’ll need to scan photos for verification? Will there come a time when we’ll doubt everything we see on TV, including the news?

Technology to counter these inevitable threats can be developed. The real problem now is spreading awareness and instilling social responsibility in the use of AI manipulation.

About the Author: John Ocampos

John is an opera singer by profession and a member of the Philippine Tenors. Even so, digital marketing has always been his forte. He is the CEO of MegaMedia Internet Advertising Inc. and the Managing Director of Tech Hacker. John is also the current SEO Manager of Softvire New Zealand and Softvire Australia – leading software ecommerce companies in Australia and New Zealand. Follow John @iamJohnOcampos

SensorsTechForum Guest Authors

The opinions expressed in these guest posts are entirely those of the contributing author, and may not reflect those of SensorsTechForum.
