The past few years have only increased the importance of technological tools in citizen–institution relations: filing taxes online, Covid-19 certificates, participatory budgeting, primary elections. Some of these developments come from within government, which produces code and open data, such as Code.gouv.fr, Data.gouv.fr and, in particular, everything produced by Etalab. Others are primarily markets for for-profit civic technology, often driven by a logic of cutting government spending through automation and economies of scale.
Despite these developments, there has not yet been enough public debate about the accountability of these tools and of the people who decide how they are used. In their current form, these technologies suffer from a serious lack of transparency, both technical and administrative. Apart from rare exceptions and in-house government initiatives, a significant proportion of projects are entrusted to service providers (often through calls for tender). These providers are, as a rule, not required to publish their source code, even for sensitive applications. Clients are thus left with technical “black boxes” that are costly to maintain and can only be handled by the original provider, which eliminates market competition. The issue here is not to oppose for-profit logic (technological development costs money, whether the software is free or not), but the lack of social control and of the accountability that should come with it. Let us take two important examples to illustrate the problems this opacity can cause.
Let us first look at online voting, which is now used not only for certain primary elections but is also extremely widespread in companies and government agencies, for example for professional elections or citizen consultations. Despite significant technological advances over the past twenty years, Internet voting remains a major challenge for academic experts in the field. The problem of coercion, in particular, seems almost intractable when one votes from home, and identifying the person who votes is also problematic. If we set these two vulnerabilities aside for a moment, however, we gain a great advantage: we can now build verifiable systems, in which any voter can check that their vote was counted. This verifiability rests on complex cryptographic protocols that are easy to run on a computer but almost impossible to organize logistically in elections using paper ballots. It requires, however, that the voting protocol be well defined and publicly available, along with the corresponding computer code.
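The idea of individual verifiability can be sketched in a few lines. The code below is a toy illustration only, not any real protocol: systems such as Helios or Belenios rely on public-key encryption and zero-knowledge proofs, whereas here the “bulletin board” is a plain list and the receipt is a simple hash.

```python
import hashlib

# Public bulletin board: in a real system, this list of receipts is
# signed and published so that anyone can audit it.
BULLETIN_BOARD: list[str] = []

def cast_ballot(encrypted_ballot: bytes) -> str:
    """The server records the ballot and hands back a tracking receipt."""
    receipt = hashlib.sha256(encrypted_ballot).hexdigest()
    BULLETIN_BOARD.append(receipt)
    return receipt

def voter_verifies(receipt: str) -> bool:
    """Individual verifiability: my ballot really is on the public board."""
    return receipt in BULLETIN_BOARD

# A voter casts an (already encrypted) ballot and keeps the receipt.
receipt = cast_ballot(b"enc(ballot-for-candidate-3)")
assert voter_verifies(receipt)       # my vote was recorded
assert not voter_verifies("0" * 64)  # a receipt I never received is absent
```

Note that this only shows a ballot was recorded as cast; checking that all recorded ballots were then tallied correctly is exactly what the cryptographic protocols add, and it is precisely the part that cannot be audited when the code is kept secret.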
The availability of public source code is increasingly becoming part of the practice of large IT companies, since it is, first of all, a guarantee of security and reliability.
This principle is a priori inseparable from verifiable voting: if voters do not know what operations the server performs when handling their ballots, they have no way to check that it performs them correctly. It therefore seems logical that all voting systems in use should follow this logic of technical transparency. This is already the case for some software, such as Helios or Belenios, the latter developed by French academic researchers, or ElectionGuard, produced by Microsoft. However, calls for tender are rarely suited to solutions developed by universities, which are at the forefront of the technology but rarely competitive for lack of marketing. Thus, like the voting machines used in the United States (whose security is constantly in question because their technical details are kept secret to protect the intellectual property of a handful of companies), many of the systems used in France do not respect these principles of transparency. Some even promise a sham verifiability whose technical value cannot be assessed, because voters do not know exactly what they are verifying or how to proceed if they suspect fraud. Note that, as the Microsoft example above shows, publishing source code is increasingly part of the practice of large IT companies, as it is first of all a guarantee of security and reliability. Hiding one’s code can certainly prevent certain attacks, but above all it allows a company that has suffered an attack to conceal it without alerting users, at the risk of the attack spreading to the entire ecosystem.
Ensuring public confidence on these issues requires transparency not only at the technical level, but also in the very choice of service providers (transparency of criteria and of the decision-making process for tenders) and of the methods used. For example, one can fully justify the recent choice of majority judgment as the voting method for the Popular Primary, even though it has recently been criticized by activists as unfavorable to certain candidates (including Jean-Luc Mélenchon and Yannick Jadot). This argument is in fact flawed, because no counting method can be neutral: every method favors certain candidates, and Kenneth Arrow’s mathematical work in 1951 proved the problem unsolvable. Only transparency about the decision-making process, then, can ward off accusations of bias.
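For readers unfamiliar with it, majority judgment elects the candidate with the highest median grade. The sketch below, with made-up candidates and grades, shows the core of the count; the real rule adds a tie-breaking procedure when medians are equal.

```python
from statistics import median_low

# Hypothetical ballots: each voter grades every candidate on an ordinal
# scale, here 0 ("Reject") up to 4 ("Excellent"). Names are illustrative.
ballots = [
    {"A": 4, "B": 2, "C": 1},
    {"A": 1, "B": 3, "C": 2},
    {"A": 3, "B": 3, "C": 0},
    {"A": 0, "B": 2, "C": 4},
    {"A": 2, "B": 4, "C": 1},
]

def majority_judgment_winner(ballots):
    """Return the candidate with the highest median grade, plus all medians.
    (median_low picks the lower middle value for even-sized lists; the
    official rule resolves equal medians with an extra tie-breaking step.)"""
    candidates = ballots[0].keys()
    medians = {c: median_low(b[c] for b in ballots) for c in candidates}
    return max(medians, key=medians.get), medians

winner, medians = majority_judgment_winner(ballots)
# Medians: A = 2, B = 3, C = 1, so B wins.
```

Note how the outcome depends on the chosen rule: a plurality count of top grades on these same ballots could crown a different candidate, which is exactly why the choice of method, not just the software, must be made transparently.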
It seems desirable to require service providers to create tools that the administration can use over time, gradually building up its own experience, if it deems it necessary.
To continue on a more positive note, let us take the example of Taiwan, whose government policy since 2016 has been strongly focused on transparency. Its legal framework did not allow the government to declare a state of emergency. Without coercive powers, the government had to rely on citizens’ sense of responsibility and on full transparency in managing the pandemic: it committed to answering all questions asked by citizens online. Another noteworthy point is that contact tracing was decentralized, which forestalled any accusation of political surveillance. As Taiwan’s Digital Minister Audrey Tang explained at a recent conference [see the video], the island’s effectiveness in managing this crisis (a death rate per inhabitant fifty times lower than France’s) is linked to a high level of trust in the government. She moreover rebutted the hypothesis that this trust stems from some “cultural difference”: a few years ago, amid a massive protest movement, trust was estimated at below 10%. The ease of managing the current crisis was thus due to strong state capacity built on mutual trust between citizens and government, itself facilitated by transparency.
Reliance on these black boxes also runs counter to the French state’s current efforts to reduce its dependence on consulting firms by limiting outsourcing to cases where the administration lacks its own specialists. Lacking certain technical skills is normal, especially in highly specialized fields. It nevertheless seems desirable to require service providers to build tools that the administration can take over in time, gradually developing its own expertise, if it deems this necessary. This transparency requirement is, after all, merely a direct application of the provisions of the 2016 Digital Republic Act.
*Enka Blanchard is a mathematician and member of the Department of Spatial Intelligence at the Polytechnic University of Hauts-de-France. In 2019, she defended her thesis on the human aspects of authentication and voting systems. Her research and publications are available on her website.