In the name of controlling its borders, the European Union is investing in artificial intelligence. The latest example: Itflows, software for forecasting migratory movements. The investigative site Disclose has revealed internal warnings about potential abuses of the tool. Experts interviewed by InfoMigrants are concerned about the growing prominence given to these technologies, considered "high-risk" for human rights.
Five million euros of European public money went into developing the Itflows project, an artificial intelligence (AI) tool designed to predict migratory movements. Scheduled for completion in August 2023, the tool, developed by the private company Terracom together with research institutes, is still in the testing phase.
But the project is considered "troubling" by several experts, including Petra Molnar, associate director of the Refugee Law Laboratory at York University, Canada, to whom InfoMigrants spoke. The lawyer and researcher, a member of the Migration Tech Observatory, which closely monitors these kinds of projects, believes Itflows "normalizes the use of high-risk technologies, such as predictive analytics software, to predict the movements of people crossing borders."
>> To (re)read: New technologies in the service of identifying migrants who died at sea
In fact, while the tool is still in testing, the investigation published by Disclose already reveals internal warnings about its potential abuse. "There is a significant risk that the information will fall into the hands of states or governments, which will use it to erect new barbed-wire fences along borders," Alexander Kjaerum, an analyst at the Danish Refugee Council and a member of the project's advisory board, told Disclose journalists.
“Stigmatize, discriminate, persecute migrants”
Members of the Itflows ethics committee regret that their warnings went unheeded. In internal documents obtained by the investigative journalists, the committee states that information provided by Itflows could, if used "inappropriately", serve to "stigmatize, discriminate against, harass or intimidate people, in particular those in vulnerable situations such as migrants, refugees and asylum seekers."
In one of these reports, the ethics committee details these potential abuses. Among other things: "Member States could use the data provided to create ghettos for migrants." The committee also points to "the risk of physical identification of migrants" as well as "discrimination based on race, gender, religion, sexual orientation, disability or age."
>> To (re)read: For migrants, biometrics all the way
The use of artificial intelligence "exposes migrants to violations of their rights, including the right to privacy, the right not to be discriminated against and the right to seek asylum," sums up Margarida Silva, researcher at the Centre for Research on Multinational Corporations (SOMO), contacted by InfoMigrants. "By investing ever more heavily in surveillance and artificial intelligence technologies, border agencies and policymakers are also choosing not to invest those resources in rescue operations and the creation of safe passage routes," she points out.
Itflows testifies to the "growing appetite of the European Union (EU) for the use of unregulated technologies that are risky for human rights," Petra Molnar laments. These technologies also include autonomous surveillance drones and cellular data extraction software.
Frontex is interested in artificial intelligence
In its investigation, Disclose notes Frontex's interest in Itflows. The European border agency "is closely monitoring the progress of the programme, to the point of actively participating in it by providing data collected during its missions," the journalists write.
However, several recent investigations have shown that Frontex engages in activities that go beyond the legal framework, in particular illegal pushbacks of migrants from Greece to Turkey. These practices are "supported by various technological solutions," warns Petra Molnar.
>> To (re)read: Frontex boss Fabrice Leggeri resigns amid scandal over illegal pushbacks of migrants in the Aegean
The agency does not intend to stop at Itflows. It has signaled its desire to rely on other AI tools and is reporting on ongoing research and development. Another project backed by Frontex that also aroused controversy was called iBorderCtrl. This tool, a kind of lie detector, was funded by the EU to the tune of 4.5 million euros. It aimed to decipher the emotions of people interviewed at the border by analyzing the micro-movements of their faces.
iBorderCtrl, like Itflows, was funded under Horizon 2020. This research and development programme accounts for "50% of the total public funding for security research in the EU," notes the specialized website Technopolice.
New technologies need to be regulated
Were it not for the work of specialized researchers and investigative journalists, the EU's development of technologies such as Itflows would go largely unnoticed: it is often carried out in great opacity. Many NGOs are calling for more transparency.
“We need stronger laws and policies that reliably protect the international right to migrate and seek asylum,” insists Petra Molnar. One of the first steps, she says, would be to “involve affected communities in the debate” around these technologies.
The EU is currently debating a framework to regulate the use of artificial intelligence: the AI Act. MEPs are due to amend the original text by the end of the year. Petra Molnar, who along with other civil society experts has proposed a number of amendments, hopes these negotiations will lead to a "complete ban on predictive analytics technologies" such as Itflows before they are deployed.