In the name of security, France plans to monitor crowds using AI and algorithms during the Paris Olympics

VIGI360 cameras provide 360° coverage of an area while recording images on a loop for a period of 72 hours. The technology can also be integrated into new intelligent video systems.

(Clément Gibon)

From the installation of 15,000 biometric cameras in Doha to surveil the behaviour of football fans during the 2022 World Cup, to Iran’s introduction last April of cameras in public spaces to identify and punish women not wearing a hijab, to Israel’s use of facial recognition to monitor Palestinians in Hebron and East Jerusalem, governments are increasingly using mass surveillance technology to support law enforcement for security purposes.

Even countries previously considered to be more respectful of civil liberties are increasingly adopting new surveillance systems. In March 2023, for example, France opened the door to the experimental use of a controversial new technology when it voted on an article of the proposed law on the 2024 Olympic and Paralympic Games. This article authorises experimentation with algorithmic and automated video surveillance (AVS) at all sporting, recreational and cultural events with more than 300 participants until 31 March 2025. The law was promulgated on 19 May 2023 and the experimentation can thus begin.

According to the article’s wording, the images obtained will have the sole purpose of detecting “in real time, predetermined events likely to show or reveal these risks and to report them”. The article vastly increases the government’s surveillance capabilities with software that can analyse images supplied by cameras or drones. If the artificial intelligence (AI) detects certain crowd movements or behaviour that it has been trained to identify as ‘abnormal’, it is able to send alerts to the relevant authorities.
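To make the mechanism the article describes more concrete, the following is a minimal, hypothetical sketch of such an alert flow in Python: a detection model scores incoming camera frames against a fixed list of predetermined events and forwards anything above a confidence threshold to a human operator. The event names, threshold and scoring stub are illustrative assumptions, not details of the system that will actually be deployed for the Games.

```python
from dataclasses import dataclass
import random

# Hypothetical list of "predetermined events" the model is trained to flag.
PREDETERMINED_EVENTS = ["crowd_surge", "abandoned_object", "unauthorised_zone_entry"]
ALERT_THRESHOLD = 0.8  # assumed confidence cut-off, chosen for illustration only


@dataclass
class Alert:
    camera_id: str
    event: str
    confidence: float


def score_frame(frame) -> dict:
    """Stand-in for the trained detection model: one score per predetermined event."""
    return {event: random.random() for event in PREDETERMINED_EVENTS}


def analyse(camera_id: str, frame) -> list:
    """Flag only predefined events above the threshold; a human operator decides what follows."""
    scores = score_frame(frame)
    return [Alert(camera_id, e, s) for e, s in scores.items() if s >= ALERT_THRESHOLD]


if __name__ == "__main__":
    for alert in analyse("cam-042", frame=None):
        print(f"Operator review needed: {alert.event} ({alert.confidence:.2f}) on {alert.camera_id}")
```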

Advocates for the law, including the Law Committee’s reporter, Senator Agnès Canayer, argue that the exceptional scale of the event and the associated security risks justify this type of system. “This event will bring together a very large number of people in confined spaces and will require greater security measures to confront the many cyber and terrorist threats that remain very real in our country,” she tells Equal Times.

According to Canayer, the right balance must be struck between security risks and the restriction of civil liberties. The senator asserts that numerous safeguards have been put in place to prevent abuses, such as support for the creation of algorithms by the Commission Nationale de l’Informatique et des Libertés (National Commission on Informatics and Liberties, CNIL), an independent French administrative authority. Canayer also maintains that AVS will remain only a tool to support human decision-making.

The introduction of such technology nonetheless represents a first on the European continent. In an open letter, 38 European civil society organisations argue that the generalised use of algorithm-driven video surveillance presents a significant risk to individual freedoms and civil liberties and violates international human rights law.

Are the Olympic Games a pretext for “generalised surveillance of public space”?

Contemporary history has shown that developments in mass surveillance technologies are hard to reverse. They have a way of gradually becoming the status quo and the public is given little choice but to accept them.

The computerisation of administrative services in the 1970s brought about an initial change of scale in the collection, processing and storage of personal data. The technological developments that followed during the 1990s, including internet networking, Global Positioning System (GPS) and video surveillance cameras, gave governments and businesses alike effective tools for keeping an ever-increasing eye on the public.

According to Guilhem Giraud, former engineer at the Direction de la Surveillance du Territoire (DST, now the Direction Générale de la Sécurité Intérieure, or General Directorate for Internal Security, DGSI) and author of the book Confidences d’un Agent du Renseignement Français (Confessions of a French Intelligence Agent), the 11 September 2001 attacks in the United States were a major turning point in the standardisation and acquisition of surveillance technologies. The Patriot Act, passed almost unanimously by the US Congress, allowed American telecommunications operators to collect both ‘technical’ data, used to identify conversations, and data directly associated with the content of conversations. This led to the abuses later revealed by Edward Snowden, a former subcontractor for the Central Intelligence Agency (CIA) and the National Security Agency (NSA).

In France, the terrorist attacks of 2015 led to the introduction of a state of emergency (a legal regime granting powers in cases of exceptional circumstances), which lasted for two years. This resulted in a long series of security laws, many of which were deemed to be oppressive under international law, and many of whose provisions were gradually incorporated into ordinary law.

The reasons put forward by lawmakers who vote on laws on internal security and surveillance, and the context in which such laws are passed, must therefore be taken into account.

“The Olympic Games are nothing more than a pretext for passing legislation that would otherwise be unacceptable to the public,” says Bastien Le Querrec, a lawyer and member of La Quadrature du Net, a French association for the defence of freedoms in the digital age. “These events are a further step towards the generalised surveillance of public space.”

This has already been seen in Japan, which authorised surveillance of accredited personnel via automated facial recognition for the 2020 Olympic Games (ultimately held in the summer of 2021). The 2019 Africa Cup of Nations held in Egypt was also an opportunity for the authoritarian regime of Field Marshal Abdel Fattah el-Sisi to deploy drones equipped with facial recognition cameras to monitor politically conscious football fans.

Changes of nature and scale

Just as computerisation marked a turning point, the integration of AI into video surveillance represents a new transition in the mass surveillance of civilian populations.

According to Le Querrec, there has been a real change in the nature and scale of surveillance systems: “The use of algorithms in video surveillance means that the behaviour of everyone filmed in a public space will constantly be analysed.” According to La Quadrature du Net, AVS is capable of identifying certain types of silhouettes, physical attributes, gaits and behaviour in individuals. While many associations consider this data to be too sensitive, the law’s defenders assert that it is non-biometric, as it does not identify individuals by name. However, while biometric and facial recognition have been excluded from the 2024 Olympic Games law, another bill, approved by the Senate on 12 June, could pave the way for their possible use in cases of serious threats.
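As a purely illustrative sketch of the distinction being debated, the record below contains only the kinds of non-identifying attributes the association describes (silhouette, gait, behaviour) and no name, face template or other biometric identifier; the field names are assumptions rather than a description of any deployed system.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Detection:
    """Hypothetical per-detection record: attributes and behaviour, but no identity."""
    camera_id: str
    timestamp: float   # seconds into the recording loop
    silhouette: str    # e.g. "standing", "crouching"
    gait: str          # e.g. "walking", "running"
    behaviour: str     # e.g. "loitering", "moving_against_flow"
    # Deliberately absent: name, face template or any other biometric identifier,
    # which is the basis of the claim that the data is non-biometric.


sample = Detection("cam-042", 128.5, "standing", "running", "moving_against_flow")
print(sample)
```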

Nevertheless, from a sociological point of view, the implications of permanent expanded surveillance by governments are significant. Research carried out in Zimbabwe has shown that such surveillance leads to a smoothing out of human behaviour and interaction, which can discourage the legitimate exercise of civil liberties and individual freedoms. The Council of Europe has also denounced this “chilling effect,” notably in connection with the use of Pegasus spyware by certain EU member states.

According to Le Querrec, the introduction of algorithms into video surveillance is a political choice which ultimately depoliticises decision-making by local authorities. In France, for example, the Régie Autonome des Transports Parisiens (Autonomous Parisian Transport Authority, RATP) carried out tests using this type of technology and concluded that it was not yet able to effectively identify certain types of suspicious behaviour indicating intent to commit a crime and instead resulted in false positives. Equipment manufacturers, however, continue to claim that their products can accurately identify such behaviour.

“We know full well that the authorities seek to identify people who panhandle. Authorising the detection of loitering as intent to commit a crime means depoliticising the way in which poor people are hunted down,” says Le Querrec.

The use of automated surveillance technology is all the more controversial given how few studies of its effectiveness exist. Numerous reports by human rights organisations such as Amnesty International have shown, however, that government use of facial recognition has a disproportionate impact on racialised people, as has been the case in New York. Because reference data for these groups is incomplete, they run a greater risk of being misidentified and are therefore subject to more unjustified arrests.

“To date, governments have not demonstrated that they are able to use these technologies without infringing on human rights, nor that the same objectives cannot be achieved through less intrusive means. The proportionality of these measures relative to their risks is a real issue,” says Mher Hakobyan, Amnesty International’s advocacy adviser on AI regulation.

A booming market and the beginnings of regulation?

Despite the many risks associated with algorithm-based video surveillance, the market for this type of technology is booming. Encouraged by security multinationals and specialist lobbies, many governments are introducing such systems without a solid legal basis. In its latest report, the association European Digital Rights (EDRi) identifies a “shocking” increase in the illegal use of biometric mass surveillance in Germany, the Netherlands and Poland, particularly following the measures put in place during the 2020-2022 health crisis.

In view of this craze for new mass surveillance technologies, Giraud warns governments against ‘techno-solutionism’. “When you engage in mass surveillance, you drown out useful information in a sea of data and the algorithms become overwhelmed. Authorities would do better to invest in targeted surveillance technologies, which will be much more effective,” says Giraud.

With the European Union currently drafting ground-breaking legislation to regulate the harmful uses of artificial intelligence through the Artificial Intelligence Act, Hakobyan reminds us of the importance of mobilisation and citizen action to counter the introduction of such technology.

“With the AI Act, the EU will have an impact on the rest of the world. People often forget the power they have to influence the final wording of a law,” Hakobyan tells Equal Times. “For example, directly addressing national representatives is something citizens can do to change legislation.”

This article has been translated from French by Brandon Johnson