November 1, 2022

How deepfakes-as-a-weapon changes warfare

On March 16, 2022, a video of Volodymyr Zelensky, the president of Ukraine, was broadcast on Ukraine 24. In it, Zelensky asked his soldiers to stop fighting. Many people immediately noticed something was off: Zelensky did not seem himself, he spoke in an odd way, and even his accent was different.

Many people quickly realised this was a deepfake, and not a very good one at that. This was the first time we saw a deepfake used in such a direct way for misinformation in a kinetic war. Yet it was spotted almost immediately by most viewers. What does this mean?

In truth, we do not know who made the deepfake, but the most logical suspect is the Russian military. If that is true, it raises the question of why the deepfake is of such mediocre quality. Some of the best open-source deepfake software is made by Russians, and Russia has had experience with misinformation and image manipulation for many years. During the Soviet era, the state already manipulated photographs to remove people, as if they had never existed.

They could do better

Even open-source deepfake technology, much of it developed by Russians, could have produced a better result. So what prevented the Russian military from using the technology more effectively? Could it be that:

  • It was a rushed job?
  • And they had no more time to fine-tune the video?
  • It was not made by the Russians and was meant to be detected? One way to prepare people against deepfakes is to create a compelling event while making sure it will be detected, so that people start talking about it and the possibility of deepfakes becomes deeply rooted in their minds. Some companies, such as Tilt, do the same with fake news.
  • Something else?

These are some possibilities. We could spin many more theories, for example that Ukraine itself created a bad deepfake to prepare and teach the country about real deepfake attacks, but we have to be careful not to get too conspiratorial. The most important takeaway is that this did happen and will happen again, and that each deepfake attack will become more sophisticated and harder to spot. It may already have happened again, and we simply have not spotted it yet.

What can we expect in the future?
Deepfakes are continuously becoming more realistic and require less data. In June this year, the Dutch police created a deepfake of a murder victim using only two pictures, in order to get witnesses to speak out. (Can you find two pictures of yourself on the internet?)

As the technology becomes more accessible, more people and companies will start using it. There are already companies, in Israel for example, that create fake personas to spy on specific targets or extract information from them.

The way fraud happens is going to change. Fraud is nowadays already more common than theft. Have you ever received a fake phone call from "the police" stating that you are accused of a federal crime, press one for more information? Have you received fake emails from people pretending to be someone they are not, who happen to be stuck abroad and in need of money? These are very common scams.

Deepfakes, at their core, are a way to mimic the likeness of a person. Scammers are learning how to use this to build even more complex scams. Currently they mainly target banks, large companies, and wealthy individuals. Once banks increase their security and spoofing them is no longer as easy as it was, scammers will aim their fakes at the next easy prey they can find: probably children and the elderly, just as they do now with emails and fake phone calls. Soon they will not email you pretending to be a relative in need; they will call you. You will hear their voice, or even see their face, as they pretend to be stuck abroad and in need of money. Will you be able to detect that something is off then?

What can we do then?

There are possible answers and solutions to mitigate the damage of deepfakes. As the Zelensky case shows, we first of all need awareness that this technology exists. Beyond that, we need laws and regulations, and the right tooling to uphold both.

DuckDuckGoose is developing the tools needed to spot deepfakes, but these tools must be integrated into the right systems. There is no point in telling someone afterwards that something was fake; the moment people are looking at content, they must already know whether it is fake. This can only happen if deepfake detection is done before the content is uploaded or broadcast. However, if you don't want to sit around and wait for these systems to be put in place, you can install such a system in your web browser yourself. We have created a browser plugin that scans the pictures you browse past. This way, you can help us find deepfakes while protecting yourself against targeted misinformation.

Questions? Let's talk

Schedule a call with us. We're always happy to help!