December 13, 2022

Here’s what you need to know about deepfakes

Uncovering the Truth Behind Deepfakes: Understanding the Technology and Its Impact on Society

Deepfakes are a form of media manipulation that uses advanced artificial intelligence (AI) to create realistic images or videos in which individuals appear to be someone else, taking the generation of fake content to another level. As deepfake technology advances, more people are able to create deepfakes effortlessly. Deepfake technology on its own is not dangerous; it is its misuse that makes it hard to trust what we see. It is essential to be aware of the potential harm of deepfake technology and to use deepfake detection techniques to verify the authenticity of images and videos.

Deepfakes, or synthetic media created using artificial intelligence, have become a significant concern in recent times. With the rise of deepfake technology, it has become easier for anyone to create fake videos that can cause irreparable damage. In this article, we will provide a comprehensive guide to detecting deepfakes and discuss the best practices to protect yourself and your organization from them.

Specifically, we will cover everything you need to know about deepfakes, including:

  • How are deepfakes created?
  • Why are deepfakes dangerous, and how do they pose a cybersecurity threat?
  • How can you spot a deepfake?

How are deepfakes created?

An infographic on how deepfakes are created

Deepfakes are created by using data, including images, videos, or audio recordings of a target, to train a machine learning model. Once trained, the model can create new media that looks real even though it is fake. The technology behind deepfakes is continuously advancing, and as it becomes more accessible, the potential for misuse rises.

1. Autoencoders 

There are two main ways to create deepfakes: autoencoders and generative adversarial networks (GANs). Autoencoders are unsupervised learning models that compress data and then reconstruct it. To create a deepfake, all that is needed is a video to serve as the deepfake's foundation and a single image of the target subject, although the quality improves when multiple images or videos of the target are used. The autoencoder analyses this footage to learn the person's facial features from different angles and in different surroundings, and then maps that face onto the individual in the foundation video by matching similar traits. In this way, someone can create a video of, for example, a celebrity or politician and make it look as if they are actually saying the words in the video.
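To make the idea concrete, here is a minimal, illustrative PyTorch sketch of the shared-encoder, two-decoder setup that classic face-swap autoencoders are built on. It is not the code of any particular tool: the 64x64 face crops, layer sizes, and training data are placeholder assumptions.

```python
# Minimal sketch of the shared-encoder / two-decoder autoencoder idea behind
# classic face-swap deepfakes (illustrative only; real tools use deep
# convolutional networks, face alignment, and large datasets).
import torch
import torch.nn as nn

IMG_DIM = 64 * 64 * 3   # flattened 64x64 RGB face crops (assumed preprocessing)
LATENT = 256

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG_DIM, 1024), nn.ReLU(),
                                 nn.Linear(1024, LATENT))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT, 1024), nn.ReLU(),
                                 nn.Linear(1024, IMG_DIM), nn.Sigmoid())
    def forward(self, z):
        return self.net(z)

encoder = Encoder()                          # shared: learns pose and expression
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
opt = torch.optim.Adam([*encoder.parameters(),
                        *decoder_a.parameters(),
                        *decoder_b.parameters()], lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(faces_a, faces_b):
    """Each decoder learns to reconstruct its own person's faces."""
    recon_a = decoder_a(encoder(faces_a))
    recon_b = decoder_b(encoder(faces_b))
    loss = loss_fn(recon_a, faces_a) + loss_fn(recon_b, faces_b)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# The "swap": encode a frame of person A, then decode it with B's decoder,
# so B's face is rendered with A's pose and expression.
with torch.no_grad():
    frame_a = torch.rand(1, IMG_DIM)   # stand-in for a real face crop
    swapped = decoder_b(encoder(frame_a))
```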

 

2. Generative Adversarial Networks (GANs)

GANs consist of two neural networks: a generator that produces impersonations and a discriminator that tries to detect those fakes. Like autoencoders, GANs rely on analysing large amounts of data. The generator keeps producing fakes, and with each round the discriminator's feedback is used to detect and correct imperfections, so the fakes progressively improve until they are good enough to be difficult for people to recognise as deepfakes.
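For readers who want to see the mechanics, below is a minimal, illustrative PyTorch sketch of that generator-versus-discriminator loop. Real deepfake GANs use far larger convolutional networks and real image data; the layer sizes and noise dimension here are placeholder assumptions.

```python
# Minimal sketch of a GAN training step: the discriminator learns to spot fakes,
# and the generator learns to fool it (illustrative only).
import torch
import torch.nn as nn

NOISE, IMG_DIM = 100, 64 * 64 * 3

generator = nn.Sequential(nn.Linear(NOISE, 512), nn.ReLU(),
                          nn.Linear(512, IMG_DIM), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(IMG_DIM, 512), nn.LeakyReLU(0.2),
                              nn.Linear(512, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def gan_step(real_faces):
    batch = real_faces.size(0)
    real, fake = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator: learn to tell real faces from generated ones.
    fakes = generator(torch.randn(batch, NOISE))
    d_loss = (bce(discriminator(real_faces), real) +
              bce(discriminator(fakes.detach()), fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: improve until its fakes fool the discriminator.
    g_loss = bce(discriminator(fakes), real)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```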

3. Deepfake Software 

Various deepfake software and deepfake apps, such as Zao, DeepFaceLab, FaceApp, and Face Swap, are available to the public and allow anyone to create realistic deepfakes. Many open-source deepfake tools on GitHub can also be used to create deepfakes. Most deepfake apps allow users to upload their own videos or images and manipulate them, and some platforms even offer deepfake voice capabilities, allowing users to create deepfake videos with manipulated speech.

However, it's important to note that deepfake technology can be used for harmful purposes, such as creating fake news, spreading misinformation, and impersonating individuals online. It is therefore crucial to be aware of the potential consequences of deepfake software and apps and to use them responsibly.

The Potential Consequences of Deepfakes 

1. Non-consensual pornographic content

Deepfakes have been used in various ways, but one of the most notable examples is the creation of celebrity deepfakes: videos manipulated to show a celebrity doing or saying something they never actually did. These deepfakes can be found on various deepfake websites and created with a deepfake app. Online platforms such as DeepFaceLab and FakeApp have made it easy for anyone to create their own deepfakes, which has led to an increase in the number of celebrity fakes being shared online. While some may find these videos amusing, it is crucial to remember that creating and sharing deepfakes without consent violates privacy and can cause harm.

In 2019, DeepTrace found that 96% of deepfake videos online were pornographic. The victims of deepfake pornography are predominantly women: celebrities from the entertainment industry appear in 99% of these videos, while individuals from news and media appear in the remaining 1%.

The dangers of non-consensual pornographic content, particularly celebrity deepfake pornography, are numerous and severe. Firstly, it violates the individual's privacy and autonomy, as they did not consent to the creation or distribution of this type of content; this can cause immense emotional distress and damage their professional and personal reputations. Secondly, non-consensual celebrity deepfake pornography perpetuates harmful stereotypes and attitudes towards women and normalizes their objectification and exploitation. This is particularly concerning as deepfake technology becomes more advanced and accessible, making such videos easy to create. With this increasing ease of access, individuals must be more vigilant and critical of online content: people need to be aware of the dangers of non-consensual pornographic content and able to identify deepfake videos and images so as not to spread them.

 

2. Company infiltration 

An infographic on the targeted industries for deepfake attacks

Deepfake attacks are on the rise and are increasingly being used in cyberattacks. A report by VMware found a 13% increase in deepfake attacks last year, with 66% of cybersecurity professionals stating that they witnessed one over the past year. Many of these attacks are conducted through email (78%), which is linked to the rise in Business Email Compromise (BEC) attacks, a method in which attackers gain access to a company email account and pose as its owner to infiltrate the company, its users, or its partners. According to the FBI, BEC cost companies $43.3 billion in the five years from 2016 to 2021. Third-party meeting platforms (31%) and business collaboration software (27%) are increasingly being used for BEC, with the IT industry being the primary target for deepfake attacks (47%), followed by finance (22%) and telecommunications (13%). It is therefore crucial for companies to stay vigilant and to have robust deepfake detection technology and security measures in place to protect their organisations against these kinds of attacks.

 

3. Identity theft through deepfakes

Deepfakes empower fraud, with losses as high as $35 million. One example is a deepfaked Elon Musk cryptocurrency scam, which cost users $2 million in just six months. And this is not the only cryptocurrency scam out there: more recently, a video was published showing a deepfake of FTX's founder, Sam Bankman-Fried, offering users an “opportunity” to double their cryptocurrency. The Federal Trade Commission has received over 7,000 complaints from users who, in total, lost more than $80 million to cryptocurrency scams. This is one of the reasons why financial institutions should monitor deepfake activity, especially on their identity verification systems: by using a false identity to open, for example, a bank or cryptocurrency account, a fraudster is able to make transactions under someone else's name.

 

4. Financial and reputational consequences

As shown above, deepfake scams have real consequences, including financial loss. According to VentureBeat, deepfake fraud has caused losses ranging from $243,000 to $35 million. These scams can be conducted using voice-generating artificial intelligence tools that imitate people's voices, which is how scammers were able to impersonate Elon Musk and Bankman-Fried in cryptocurrency scams. Successful scams like these, with real financial consequences, can also leave consumers with a poor perception of the organisations involved.

How to spot a deepfake?

In a digital world where deepfake scams are increasingly on the rise, it is important that people know how to spot a deepfake. 

An infographic on how to spot a deepfake

Here are some signs that an image or video you are looking at is a deepfake:

  • Blurring only around the face and nowhere else, or vice versa (see the sketch after this list for a rough way to check this)
  • Uneven skin tone around the face
  • The face becoming blurry when something blocks it (e.g., a hand)
  • Unusual blinking or motion
  • Poor lip synchronization
  • Inconsistent lighting and background
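As a rough illustration of the first sign, the script below compares the sharpness of the detected face region with the rest of the frame using OpenCV. It is a crude heuristic sketch, not a reliable detector; the input file name and any thresholds you might apply are assumptions.

```python
# Heuristic sketch: is the face region noticeably blurrier (or sharper) than
# the rest of the image? Illustrative only; not a dependable deepfake check.
from typing import Optional
import cv2
import numpy as np

def sharpness(gray_region: np.ndarray) -> float:
    """Variance of the Laplacian: lower values mean blurrier content."""
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

def face_blur_ratio(image_path: str) -> Optional[float]:
    img = cv2.imread(image_path)
    if img is None:
        return None
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]
    background = gray.copy()
    background[y:y + h, x:x + w] = 0   # crudely mask out the face region
    ratio = sharpness(face) / (sharpness(background) + 1e-6)
    return ratio  # values far from ~1 suggest mismatched blur between face and scene

if __name__ == "__main__":
    print(face_blur_ratio("suspect_frame.jpg"))  # hypothetical input file
```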

However, the most effective way to determine whether an image or video is a deepfake is to use a deepfake detection tool.

Deepfake Detection Tool

Overall, our deepfake detection tool is a reliable and efficient solution to identify and protect oneself against the potential harm of deepfakes. It is available to use for individuals, organizations, and businesses looking to verify the authenticity of images and videos.

1. DeepDetector

Our deepfake detection tool, DeepDetector, determines with 93% accuracy whether an image or video is a deepfake. DeepDetector can identify AI-generated or manipulated faces because it has seen hundreds of thousands of authentic and deepfake images and has learned from them to distinguish photos captured with a real camera from those generated or manipulated by a computer.

Furthermore, DeepDetector is constantly updated with the latest deepfake detection techniques and algorithms to ensure that it remains effective in identifying deepfakes. It also has a user-friendly interface which allows you to upload and process your images or videos efficiently.
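For context, the snippet below is a toy PyTorch sketch of the general technique behind such detectors: a binary classifier trained on authentic versus manipulated face crops. It is not DeepDetector's actual architecture or code; the input size, layers, and label convention are illustrative assumptions.

```python
# Toy sketch of a deepfake detector as a binary image classifier
# (illustrative only; not DeepDetector's implementation).
import torch
import torch.nn as nn

class FakeFaceClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(),
        )
        self.head = nn.Linear(64 * 8 * 8, 1)  # single logit: real vs. fake

    def forward(self, x):
        return self.head(self.features(x))

model = FakeFaceClassifier()
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(face_batch, labels):
    """face_batch: (N, 3, 64, 64) crops; labels: (N, 1), 1.0 = deepfake, 0.0 = authentic."""
    loss = loss_fn(model(face_batch), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def predict(face_batch):
    with torch.no_grad():
        return torch.sigmoid(model(face_batch))  # probability that each face is fake
```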

2. DeepfakeProof

If you’re looking to be on a constant lookout for deepfakes while browsing the web, then DeepfakeProof is for you.

DeepfakeProof is a free deepfake detection browser plug-in that can be used by journalists, open-source intelligence experts, and general users to protect themselves against deepfakes while browsing the web. It automatically scans every web page the user visits and warns the user if any media generated or manipulated with deepfake technology is found.

Designed to provide automatic and explainable deepfake detection, DeepfakeProof helps with deepfake content monitoring and identifying disinformation mechanisms. This allows journalists and open-source intelligence experts to quickly identify and verify the authenticity of images and videos they come across in their research, and for general users to be more aware of the presence of deepfakes online.

3. Replicant

Do you run a biometric or identification system and want to test its ability to detect deepfakes?

If so, our deepfake generation tool, Replicant, can help. Replicant is a user-friendly tool that allows you to easily create custom-generated deepfakes suitable for testing. With Replicant, you can make a static image of the target move and mimic any facial movement. The tool also produces output quickly, making it efficient for penetration testing. Using Replicant, you can determine whether your system can spot deepfakes and identify potential vulnerabilities.

Ensure that your biometric and identification system is up-to-date with the latest deepfake detection techniques by testing it with Replicant.

Questions? Let's talk

Schedule a call with us. We're always happy to help!