The technology behind deepfakes (a portmanteau of "deep learning" and "fake") was created in 2014 by the researcher Ian Goodfellow. Called GAN (Generative Adversarial Networks), it rests on two competing algorithms: one generates the fake content, while the other tries to detect the tampering. Through this ingenious setup, the two algorithms constantly push each other to improve, producing videos that come ever closer to reality. They are trained on databases such as image banks or videos found on the Internet: the better known a person is, and the easier it is to find content about them, the more realistic the deepfake will be. This technology therefore raises many problems, especially in terms of image rights and manipulation.
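The adversarial principle described above can be illustrated with a deliberately tiny sketch: a generator learns to produce samples resembling "real" data while a discriminator learns to tell real from fake, each improving by competing with the other. This is our own minimal toy on 1-D numbers, not the original 2014 implementation; every function and parameter name here is an assumption for illustration.

```python
import numpy as np

# Toy GAN sketch (illustrative assumption, not Goodfellow's original code):
# a generator G maps noise to samples; a discriminator D scores how "real"
# a sample looks. They are trained against each other.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_real(n):
    # "Real" data: 1-D samples near 4.0 -- stands in for the image/video
    # databases the article mentions.
    return rng.normal(4.0, 0.5, size=(n, 1))

# Generator: a single affine map from noise z to a sample (deliberately tiny).
g_w, g_b = rng.normal(size=(1, 1)), np.zeros((1,))
# Discriminator: logistic regression on a sample.
d_w, d_b = rng.normal(size=(1, 1)), np.zeros((1,))

def generate(z):
    return z @ g_w + g_b

def discriminate(x):
    return sigmoid(x @ d_w + d_b)  # probability the sample is "real"

lr, batch = 0.05, 64
for step in range(2000):
    z = rng.normal(size=(batch, 1))
    fake, real = generate(z), sample_real(batch)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    p_real, p_fake = discriminate(real), discriminate(fake)
    grad_real = p_real - 1.0            # d(loss)/d(logit) on real batch
    grad_fake = p_fake                  # d(loss)/d(logit) on fake batch
    d_w -= lr * (real.T @ grad_real + fake.T @ grad_fake) / batch
    d_b -= lr * (grad_real + grad_fake).mean(axis=0)

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    p_fake = discriminate(fake)
    grad_logit = (p_fake - 1.0) * d_w.T  # chain rule through D
    g_w -= lr * (z.T @ grad_logit) / batch
    g_b -= lr * grad_logit.mean(axis=0)
```

After this loop, samples from `generate` drift toward the real data around 4.0: the generator has learned to imitate the "real" distribution precisely because the discriminator kept catching its earlier attempts, which is the feedback loop the article describes.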
The first viral videos were published in 2017 on the community website Reddit. They then multiplied exponentially, reaching, according to some sources, several tens of thousands of fake videos on the Internet.
However, this technology is not used only for harmful purposes. The creation of "realistic" videos with the help of AI can be very useful in cinema, for example to bring deceased actors back to the screen. Recreating actors with CGI already exists and has proven very useful:
On the set of Fast and Furious 7, after the death of Paul Walker (one of the lead actors), the film's ending was made possible by producing synthetic images of the actor. Deepfake technology goes further by providing both a realistic image and a realistic voice.
This technology, although perceived by many as revolutionary, raises a number of questions and concerns in both the public and private sectors. Cases of abuse are on the rise, and some of them can be very serious. Moreover, controlling them is greatly complicated by several factors:
First, the ease of access to many deepfake tools and apps (FaceApp, ZAO, Reface, SpeakPic, DeepFaceLab, FakeApp, etc.) makes this kind of practice available to everyone, including malicious actors.
Second, detection and the deployment of control tools are complex. GAN technologies, based on the principle of machine learning, improve autonomously through the constant competition of two opposing networks, which makes detection ever more difficult. Control also operates with a delayed loop: as with malware, detection technology reacts to new deepfakes after the fact, so creators always keep a head start.
The risks associated with this new technology are numerous, and its use creates new problems that many companies and governments are now paying attention to.
First, there is a risk of manipulation and misinformation. Doctored deepfake videos of influential people (public or private figures) can pose a serious danger. This was the case in Gabon: in 2018, due to illness, President Ali Bongo did not appear in public for several months. In December of that year, a video was released in which Ali Bongo reassured the population about his state of health. His political opponents immediately declared the video a deepfake. The hysteria it created helped precipitate an attempted coup d'état aimed at overthrowing the president. (The video was not, in fact, a deepfake.)
The risk of data or money theft is also very high, in particular through "deep voice" scams (deepfake-derived techniques that reproduce a voice using artificial intelligence). One victim of such a scam was a manager at a major bank in the UAE: the perpetrators reproduced the voice of one of his most important clients, leading him to authorize a transfer of as much as 35 million euros, which caused great damage to his image and his business.
Identity theft and the humiliation of opponents are likewise among the main risks. This happened, for example, to Rana Ayyub, an Indian journalist who defends women's rights: during a defamation campaign against her, her face was inserted into several pornographic videos. This practice is becoming more common and mainly targets women.
The ease of creating deepfakes, as well as their exponential growth, has alarmed the private and public sectors, which are now trying to protect themselves. This is notably true of Google, which in 2019 released a database of over 3,000 deepfake videos to support detection research. In parallel, together with other companies in the digital sector, it launched contests aimed at identifying these fakes. Some companies go even further and build AI systems to "de-identify" faces. This is the case of Facebook, whose FAIR research lab is working on a filter of this kind that could be applied to videos hosted on the platform.
States, for their part, are trying to legislate in order to provide a regulatory framework limiting the spread of deepfakes. Since October 22, 2018, anti-manipulation ("fake news") legislation has been part of the French legal arsenal, limiting the spread of false information on the Internet. However, it faces several limits, such as individuals' freedom of expression, the neutrality of the main platforms (Facebook, Twitter, etc.), and users' right to anonymity. All these difficulties complicate the implementation of control levers.
The lag in the development of these control tools is a concern. The main fear is to see videos created without any root source. Today, some professionals can trace back to the source material used to create these fake videos; however, experts agree that, as the technology keeps improving, the risk of videos with no traceable source is very high in the near future, which would make any control very difficult.
"We are not very far from generating completely artificial content from a text description," concludes Laurent Amsaleg, director of research and head of the LinkMedia team.
Pierre Paran for AEGE Risk Club
To go further: