“The purpose of these experiments is to accustom the population to the presence of these surveillance technologies.” — Olivier Tesquet

On May 11, a working group of the Senate Judiciary Committee submitted a report calling for the experimentation of facial recognition in France for a period of three years. Under the pretext of wanting to regulate the use of this technology, the three rapporteurs, Arnaud de Belenet (LREM), Jérôme Durain (PS) and Marc-Philippe Daubresse (LR), propose some thirty measures to “ward off the risk of a surveillance society”, while opening the door to experimentation in public space.

In their first proposal, the senators call for a “national survey aimed at evaluating the French public’s perception of biometric recognition” as well as for “identifying the levers for better acceptability of this technology”. A proposal that raises concerns about the population growing accustomed to facial recognition. Although little known to the general public, this technology has been used by the French police for several years. The association La Quadrature du Net notes that in 2021 the police “performed 1,600 facial recognition operations per day, outside of any legal framework” on the 8 million faces in the TAJ (Processing of Judicial Records) file. On Tuesday, May 24, the association also filed a collective complaint against the Ministry of the Interior and the French state over techno-policing and the surveillance of public space.

Olivier Tesquet, a Télérama journalist specializing in digital issues and civil liberties and author of the book Technological State of Emergency, answered questions from Luc Offret for QG about the risks of this technology.

Olivier Tesquet, Télérama journalist and author of the book Technological State of Emergency, published by Premier Parallèle.

QG: What are the dangers of facial recognition today in France?

Olivier Tesquet: The risk, and it already exists today, is that this technology is used without sufficient legal and judicial safeguards. The general public does not know it, but facial recognition is already being used in France, in the Processing of Judicial Records (TAJ), which contains 8 million photos and has been the subject of more than 600,000 queries since 2019. More informally, and on vague legal grounds, this technology is also being used during police checks. We then find ourselves in a situation we have already experienced with other technologies, such as the drones of the Global Security Law, censored by the Constitutional Council and then reintroduced in another text: use precedes the law, whose sole mission becomes to legalize what was illegal.

In this case, there is an extremely significant risk of violating fundamental freedoms, which the report itself emphasizes. The most spectacular and dangerous application of this technology is its real-time use in public space for police purposes. No one should have to walk down the street subjected to a permanent, generalized identity check.

A police officer equipped with a body camera, which is used, among other things, to capture and record the faces of demonstrators.

QG: The rapporteurs wanted to set red lines… Are they enough, or is there a real risk of opening the door to a surveillance society?

The risk is always the same: a red line is drawn on the ground, but it comes with exceptions, particularly on security grounds. By carving out these exceptions, we always run the risk of trivialization and of a ratchet effect that will make it very difficult to go back once this technology is deployed. That said, the report does recommend real bans, such as emotion detection, inherited from the racist pseudosciences of the 19th century, or the much-discussed Chinese-style social credit. But even here, this does not mean we are entirely immune to this type of control, which can materialize in more insidious ways. In France, for example, social welfare agencies massively cross-reference data to identify “at-risk” beneficiaries, leading to an algorithmic and discretionary evaluation of individuals.

QG: If such a law is passed, is there a risk that the public will come to accept this technology?

I am always very wary of these so-called experimental schemes, because very often, for the reasons stated above, they lock us into inevitability. The question is worth asking: is it about trying out the technology to see whether it improves society, or just getting people used to its presence? When I hear certain manufacturers say that this or that technology “cannot be uninvented”, does that mean we absolutely have to use it? It takes a great deal of moral strength to say no. That is difficult when the state of emergency, which has become a mode of government since 2015, has helped make the exceptional ordinary and the temporary permanent. Nor do I forget the calendar of major sporting events – the Rugby World Cup in 2023, the Olympics in 2024 – which are always key moments in the normalization of these instruments.

More generally, I have the feeling that we are not debating the technology itself, but only the ways of using it. Yet in recent years we have seen major American cities, such as San Francisco, decide to ban facial recognition. This shows that all it takes is political will.

QG: The third point of the report is a ban on real-time remote biometric surveillance during demonstrations on public roads. But is there a risk that the captured images will later be analyzed by facial recognition software?

In theory, police use of CCTV images is subject to judicial oversight. In practice, however, we very often find that they are used in an uncontrolled manner. We come back to the age-old question of oversight: who gets access to the data, and how? Without strong guarantees, we can fear the same abuses in the biometric processing of images.

Vigilance is all the more necessary as law enforcement would like to integrate facial recognition into other police files, such as the Wanted Persons File (FPR) and those related to threats to public safety (CRISTINA, GIPASP), which are regularly criticized because they allow the tracking of political, trade-union or religious opinions.

In this context, given all these parameters, and despite the Senate’s cautious recommendations, one does not need much dystopian imagination to picture a future in which very fine-grained surveillance of social movements is deployed on the back of this technology. A researcher like Vanessa Codaccioni has demonstrated this very well: the repressive politics born of the fight against terrorism has become ordinary law, and today the same legal tools are used to monitor environmental activists or yellow vests as would-be terrorists.

In many ways this is, unfortunately, a return to the roots, combined with a change of scale. Historically, since the 19th century, the only categories of the population required to prove their identity were those considered dangerous: the poor, foreigners, repeat offenders. Today, new categories have been added, and tomorrow anyone could find themselves caught in the nets of endemic surveillance.

Luc Offret