Due to the rise of deepfakes, the question ‘What is real?’ is increasingly on the political and social agenda. Some malicious uses of deepfakes have already been banned or addressed in legal frameworks and guidelines. However, enforcing these legal rules is costly and complex, and enforcement is often only discussed once the damage has already been done (Van der Sloot, Wagensveld & Koops, 2022). Consider, for example, the Digital Services Act, which could include opportunities to regulate ‘deepfake threats.’ It is unlikely, however, that the hard obligations the Digital Services Act places on platforms will extend to rules on the removal of such content. Experts argue that, to address disinformation and harmful media, we should focus on how content is distributed and shown to people rather than push for removal.
Recently, deepfakes have been used to scam CEOs, to impersonate a U.S. Admiral in a Skype chat and swindle nearly $300,000 out of a widow, and to commit tax fraud on behalf of a Chinese criminal group. It is therefore not surprising that Europe is pushing for more regulation of deepfakes to prevent abuse. Regulation of deepfakes is included, for example, in the European Commission's proposal (EUR-Lex, 2021) for upcoming regulations on Artificial Intelligence (AI). Since the AI regulations will probably take some time to enter into force, the current legal framework should be considered in the meantime.
Dutch criminal law is generally well equipped to deal with criminal uses of deepfakes, i.e., fake content used for libel, slander, and hate speech. Committing fraud using a deepfaked voice of another person is also punishable. Producing a clone of someone's voice is not a criminal offense in itself, but using that recording as an instrument to extort money from others (or attempting to do so) is.
The General Data Protection Regulation (GDPR) also contains various provisions that restrict the creation and distribution of deepfakes. Think of the obligation to inform the people portrayed that they appear in a deepfake. Some even ask whether the GDPR should be read in such a way that deepfakes are prohibited by definition. However, images and videos of non-existent people generated with deepfake technology are not covered by the GDPR, even though such fake images can also be used to commit fraud, e.g., synthetic identity theft. Synthetic identity theft is a type of fraud in which a criminal combines real information (usually stolen) with fake information to create a new identity, which is then used to open fraudulent accounts and make fraudulent purchases.
European law can also be considered when looking at deepfakes of an unlawful nature. Victims of deepfake technology can invoke two human rights under the European Convention on Human Rights (ECHR): Article 8 ECHR (right to privacy) and Article 10 ECHR (right to freedom of expression). Furthermore, at the European level, the most relevant policy trajectories and regulatory frameworks related to negative applications of deepfakes are:
Even though current rules and regulations offer at least some guidance for mitigating the potential negative impacts of deepfakes, the legal route for victims remains challenging. Typically, different actors are involved in the life cycle of a deepfake, and these actors may have competing rights and obligations. Moreover, enforcement of current laws is made more difficult by the enormous amount of manipulated material already in circulation, which experts expect will only grow and become increasingly difficult to detect.
While deepfakes are forcing society to adopt a higher level of distrust towards all audio-graphic information, regulatory institutions need to adopt policies to mitigate deepfake-related risks. According to research by the European Parliamentary Research Service (2021), policy measures can be introduced along five dimensions:
Risks and harms associated with deepfakes will stimulate lawmakers to propose legislation. Lawmakers should be aware that although deepfake technology is relatively new, it does not raise a host of unique challenges. Attempts to address deepfake political interference and manipulated content should weigh their potential (unintended) consequences, such as the entrenchment of market incumbents. Legislation that targets deepfakes should therefore be narrowly focused on a small category of content that seeks to inflict specific harms.
To conclude, researchers state that legal and organizational solutions can be found for all kinds of negative impacts caused by deepfakes. However, any solution may raise new regulatory questions, so a broad political and social discussion is necessary before new rules are introduced. Active participation of different stakeholders, such as lawmakers, private-sector businesses, and citizens, is necessary, since any of these groups might fall victim to the malicious use of deepfake technology.