09.11.2022

The false promise of transparent deep fakes: How transparency obligations in the draft AI Act fail to deal with the threat of disinformation and image-based sexual abuse

Creating hyper-realistic impersonations of people is easier than ever before, and with the new technology come new regulatory challenges.

Deep fakes are the latest trend in digital impersonation. The technology uses artificial intelligence and machine learning to swap or manipulate faces and bodies in images, video, and audio. The result is hyper-realistic impersonations that make individuals appear in places they have never been, doing things they have never done, and saying things they have never said.

The deep fake phenomenon emerged around 2018, when a Reddit user released “FakeApp”, a tool that enabled users to create deep fakes easily and for free. FakeApp no longer exists, but similar software has since surfaced and amassed large user bases. These tools are easy to use and require little to no technical background. Even though deep fakes have many positive uses, easy access to the technology has created a massive surge in non-consensual pornographic videos. A now-banned subreddit with nearly 100,000 members was dedicated to creating pornographic deep fakes of celebrities. With deep fake software so widely available, anyone can now easily create non-consensual pornographic videos of ordinary people. A 2019 analysis by the cybersecurity company Deeptrace found that 96% of all deep fakes online are pornographic and disproportionately target women. In light of these growing concerns, academics and civil society actors alike are calling for effective regulation.

Unsurprisingly, deep fakes have also been on the European Union’s agenda. A 2021 study conducted at the request of the European Parliamentary Research Service recognised the exacerbating effect deep fakes have on “image-based sexual abuse”. The study discussed the risks of deep fakes at the individual, organisational, and societal levels. It highlighted in particular the potentially destabilising political impact of deep fakes, along with the risks and harms of pornographic deep fakes, and it surveyed the current regulatory landscape and options.

Deep fakes were subsequently included in the draft AI Act of the European Union. The regulation aims to lay down harmonised rules for AI systems, which are categorised by risk. Under the draft AI Act, the limited-risk group, which deep fakes fall under, is subject to a blanket transparency obligation. This transparency obligation entails a specific disclosure obligation for deep fake content: to comply, users of deep fake software must disclose that the content has been artificially generated or manipulated. The limited-risk group is governed through codes of conduct, which raises questions about how this transparency and disclosure obligation will be enforced.

Under the transparency obligation, some deep fakes are exempt from disclosure on freedom of expression grounds. This exemption is likely to lead to inconsistencies in practice, since the burden falls on the user to decide whether to disclose a deep fake. Furthermore, the definition of the “user” in the draft AI Act excludes non-professional activities, which is where most malicious deep fakes fall. Because of this narrow definition, the disclosure obligation will apply to only a small portion of deep fakes while creating an unnecessary burden on legitimate users of the technology.

The risk this technology poses to democratic discourse is apparent, as it heightens the already existing “fake news” or “disinformation” problem. Government and state officials, including Members of the European Parliament, have already been targeted. The AI Act’s ambition to address disinformation through transparency and disclosure is therefore understandable. Some scholars argue that with deep fakes, the risk arises when people believe that what they are seeing is real. Transparency obligations might therefore have positive effects, especially in the context of disinformation and threats to democratic discourse.

However, the enforcement problems with the transparency obligation under the draft AI Act overshadow the positive impact it might have. Beyond the enforcement issues already mentioned, there is a debate to be had about whether “transparency” is a suitable tool for mitigating the risks of this technology in the first place. Insights from cognitive and behavioural science hint at its shortcomings: repeated exposure to false information increases the chances of it being remembered as true, and this effect persists even when the false information is presented alongside a fact-check warning.

Lastly, it should be underlined that the risks and harms of deep fakes are deeply contextual. In the context of image-based sexual abuse, transparency obligations are not helpful, as the risk lies in the violation of sexual self-determination: harmful effects occur even when pornographic media is labelled as a deep fake. It follows that the transparency and disclosure obligations under the draft AI Act do not address the most pressing harms of this technology.

Deep fakes bring challenges in defining victims, finding perpetrators, providing justice, and holding internet platforms accountable. These challenges are further compounded by the unclear application of the transparency obligation. Blanket transparency obligations for deep fakes cannot fully address the risks the technology presents. Regulation needs to be tailored to the specific uses of deep fakes and supported by further efforts from platforms and civil society.


Further Reading:

  1. Veale, M., & Zuiderveen Borgesius, F. (2021). Demystifying the Draft EU Artificial Intelligence Act. Computer Law Review International, 22(4), 97–112. https://doi.org/10.9785/cri-2021-220402
  2. Chesney, B., & Citron, D. (2019). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. California Law Review, 107(6), 1753–1820. https://doi.org/10.15779/Z38RV0D15J
  3. Citron, D. (2019). Sexual Privacy. Yale Law Journal, 128, 1872–1960.
  4. McGlynn, C., Rackley, E., & Houghton, R. (2017). Beyond ‘Revenge Porn’: The Continuum of Image-Based Sexual Abuse. Feminist Legal Studies, 25(1), 25–46. https://doi.org/10.1007/s10691-017-9343-2


Teaser photo by Jesús Rocha on Unsplash