Deepfake: A Curse Or An Opportunity?
- BSLB
- Apr 19, 2024
- 5 min read

The development of intelligent machinery has received a great deal of attention over the last 50 years, both because such machinery benefits humankind and because it is becoming ever more necessary for future advancement. It has the potential to completely change our way of life. However, given the numerous reasonable doubts and real concerns that have come to light, the pursuit of AI may not be entirely risk-free. Deepfake technology, for example, which has been called “one of the most worrying fruits of rapid advances in artificial intelligence,” raises serious intellectual property concerns in the context of generative AI (William A. Galston, January 8, 2020).
The English term “deepfake” combines “deep learning” with “fake.” Deep learning is the field of artificial intelligence research that builds neural network systems capable of processing complex input, while “fake” simply means false. “Deepfake” therefore describes a system that uses artificial intelligence and neural networks to modify videos, audio, or photos semi-automatically, producing new “fake” but strikingly realistic content.
In recent years, deepfakes have been employed in several Hollywood blockbusters to replace stunt doubles or to revive performers. They may also eliminate the need for the heavy makeup usually applied to actors to make them appear younger or older than their actual age. At first glance, this is a positive use and a potentially revolutionary resource for the world of cinema.
Actor deepfakes are not always welcome, however, as Bruce Willis’s story demonstrates. It was reported that Willis had licensed the use of his digital likeness for advertising productions, but it later emerged that the Hollywood star had never consented to the use of his image. The actor has aphasia, a disorder that impairs his ability to speak, and declared his retirement from acting at the start of 2022; The Telegraph wrote that he was expected to license the use of his digital counterpart through a company called Deepcake. Willis’s representative told the BBC that “there is no agreement with Deepcake,” and it remains unclear whether Deepcake ever held any rights to Willis’s image.
In a similar vein, British actress Keira Knightley is building a defense against artificial intelligence (AI). The actress, 38, plans to secure copyright protection for her face in order to stop unapproved use of her image by machine-learning systems. Knightley backed the WGA and SAG-AFTRA strikes and drew attention to the film industry’s concerns about protecting performers’ voices. She emphasized the potentially devastating nature of the new technology and expressed hope that governments would step in to manage the issue.
Another example worth considering involves former U.S. President Donald Trump, who was depicted in AI-generated images surrounded by groups of African-American individuals. The intention behind such a portrayal has been the subject of much speculation, with many interpreting it as an incautious attempt at political strategy and tokenism, aimed at presenting him as popular among that specific portion of the electorate.
In addition, deepfake videos have become increasingly realistic with the help of VoCo technology, developed by researchers at Adobe and Princeton University. This technology enables an alternative audio track or sequence of words to be incorporated into a video, with the speech automatically adapted to the person’s voice: the system can synthesize new speech from the voice samples the user has fed into the VoCo algorithm. VoCo thus makes it even more difficult to draw the line between the authentic and the artificial.
Because deepfakes cannot be sufficiently or successfully addressed by current, conventional procedures, they present complex legal difficulties. The growing use of artificially intelligent systems has important consequences for copyright law. The ability of this technology to produce original content without further or targeted human participation raises complicated questions about authorship and copyright ownership, including who is entitled to credit for such works. Who is the copyright owner? The person whose image is copied? The company or website that uses it? Or the creator of the deepfake? These are questions that domestic copyright laws are not yet able to answer.
One of the most recent instances of generative AI systems allegedly violating copyright is a class action brought by three artists: Sarah Andersen, Kelly McKernan, and Karla Ortiz. In that case, a district judge of the federal court of California rejected the compensation claims the three artists had filed over AI systems’ allegedly unlawful use of their works. The U.S. Copyright Office requires that a copyright be registered before an infringement suit can be filed, which only Andersen had done. Andersen, however, failed to provide conclusive evidence that her works had essentially been copied, and the presiding judge observed that most AI-generated images are inherently produced by combining parts of copyrighted works; the use of copyrighted material in AI-generated works therefore does not necessarily violate copyright law, as long as the final work is noticeably different from the original material. “Substantial similarity” between the AI-generated work and the copyrighted work registered in the training data set must accordingly be established to prove infringement.
The actual issue lies in determining where to set the threshold. In today’s globalized world, we must try to embrace the “fourth industrial revolution” by balancing the potential and the risks of technological advancement, approaching it not with reflexive skepticism but with a forward-looking attitude. As long as the person portrayed gives permission and the modification is not misleading, altering media content is not in itself illegal. As mentioned earlier, deepfakes and other forms of generative artificial intelligence should not be condemned outright, because they can foster innovation. This technology can open up new methods of managing and transforming digital content, provided its use remains non-offensive. Careless application of these technologies, however, can compromise fundamental legal values, including life, the well-being of the parties represented, their physical and mental integrity, privacy, and cultural heritage, as well as enabling counterfeiting and copyright infringement. Several authors have proposed technical and technological countermeasures against deepfakes, including authentication technologies, detection algorithms, verification platforms, educational initiatives, and public-private collaborations.
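Of the countermeasures mentioned above, authentication technologies are the simplest to illustrate. The minimal Python sketch below (all names, the key, and the byte strings are hypothetical, not drawn from any real verification platform) shows the underlying idea: a publisher signs the original media bytes at release time, so that anyone can later check whether the content has been altered, for example by a face-swap edit.

```python
import hashlib
import hmac

# Illustrative only: in practice this key would be a managed secret,
# and real systems (e.g. content-credential schemes) use public-key
# signatures rather than a shared key.
SECRET_KEY = b"publisher-signing-key"

def sign_media(media_bytes: bytes, key: bytes = SECRET_KEY) -> str:
    """Return an HMAC-SHA256 tag for the original media bytes."""
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str, key: bytes = SECRET_KEY) -> bool:
    """Check the media against the published tag; False means tampering."""
    expected = hmac.new(key, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"frame-data-of-the-original-video"
tag = sign_media(original)

print(verify_media(original, tag))                         # True: untouched
print(verify_media(b"frame-data-after-a-face-swap", tag))  # False: altered
```

The point of the sketch is that authentication works prospectively: it can prove a given file matches what was originally published, but it cannot by itself identify a deepfake that was never signed, which is why detection algorithms and verification platforms are proposed alongside it.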
The recently proposed European legislation on Artificial Intelligence seeks to establish rules for reliable and safe AI use that respect individuals’ values and fundamental rights. The proposal allows technologies such as deepfakes but imposes minimum requirements and a transparency obligation on those who use them. Specifically, Article 52 of the proposed regulation requires deepfake creators to label generated content so that everyone can see it is artificially created or manipulated. The same article, however, provides that this obligation does not apply when the use is authorized by law for purposes such as the detection, prevention, investigation, or prosecution of crimes, or when it is necessary for the exercise of the right to freedom of expression and freedom of the arts and sciences guaranteed by the Charter of Fundamental Rights of the EU, provided adequate safeguards protect the rights and freedoms of third parties. This exemption clause opens up a wide range of possibilities that could deprive the transparency obligation of practical significance. The regulatory response outlined by the European legislature is therefore currently considered by many to be insufficient to address the issues raised by deepfakes.
In conclusion, heightened awareness among internet users, paired with accountability and attentiveness, is the best defense strategy currently available. It is fascinating to consider that the limits of deepfake technology are more a matter of morality and ethics than of technology or law: however remarkable the technical progress, it is the ethical implications we must be mindful of. The hope is that, in the meantime, European institutions, which play a crucial role in this context, will develop a more adequate regulatory framework to mitigate the potential negative impacts of the deepfake phenomenon. Until then, it will remain essential for individuals to navigate the digital environment cautiously, identifying and fending off potential threats to their privacy and security.
CC: Piero Fioretto and Flavia Gabrielli