Safety at work: a trojan horse for new monitoring technologies?

For many businesses today, improving the working environment increasingly means implementing surveillance systems – monitoring even the way workers smile. Under the pretext of health and safety at work, some companies have deployed remote and mobile body-temperature or pupil-dilation sensing systems combined with artificial intelligence software. In this photo, a fisherman is identified as he arrives at his place of work in Thailand.

(ILO-Asia Pacific)

In Stanley Kubrick’s masterful film 2001: A Space Odyssey, the supercomputer HAL 9000 (Heuristically programmed ALgorithmic computer) uses artificial intelligence to detect emotion and suffering, and controls all of a spaceship’s systems, including its crew. The new labour monitoring practices emerging today – with the stated aim of improving the working environment – appear just as outlandish. Take, for example, Canon’s Beijing office, which has installed smart cameras that prevent any action from being performed (such as scheduling a meeting or accessing certain rooms) unless they detect a smile. In Europe, some companies are offering their employees the chance to participate in business-related trials in which they are supplied with glasses that generate emotion indicators. One example is the Shore app, developed by the Fraunhofer Institute for Integrated Circuits IIS in Germany, which has been used in Google’s ‘smart glasses’.

These practices have also made inroads into the transport sector. Digital platforms have changed things so profoundly that new groups of workers are emerging; for app drivers, for example, invoicing is handled chiefly through an electronic hiring platform (such as Uber or Cabify).

Meanwhile, companies like Amazon have started monitoring drivers for (un)safe driving. Recently, the online retail giant announced that its delivery fleet will be equipped with smart cameras, claiming that the move would ‘improve the safety’ of its drivers.

These cameras (already in place in half of Amazon’s total fleet in the United States) automatically record ‘events’, including slips by the delivery driver. Each time an event is recorded, the camera sends the company images so it can assess the worker. Not only does the camera record and report events, a metallic voice also scolds the driver (‘driver distracted!’) on each occasion. If a camera registers more than five events per 100 trips, drivers can automatically lose the bonus that many of them depend on.

These new practices are a far cry from previous uses of monitoring mechanisms such as cameras, GPS and combined artificial intelligence systems to improve plant safety and security (protecting against theft or fire, for example) or to improve the quality of business processes and activity. Workers’ safety and health can indeed sometimes be a legitimate incentive for workplace monitoring. The European Occupational Safety and Health Framework Directive (89/391/EEC) requires businesses to ensure safety, implying an ongoing effort to improve levels of worker protection. However, where the prevention of risks at work is concerned, there are many situations in which it is not possible for businesses to monitor or supervise activity on the ground by direct means. The limits of employers’ ambitions to monitor everything are set either by collective agreements or by law.

Addressing AI in collective agreements: a mixed picture

By engaging with the challenges of new technologies at work, workers can be a driving force in guaranteeing their safe implementation and usage – particularly with the adoption of collective agreements. In the transport sector, for instance, the use of GPS tracking systems is widespread. Sometimes, allegedly to protect workers’ health and safety, some employers use the data collected through these systems for disciplinary purposes, posing a challenge for collective agreements to deal with.

One example is the agreement negotiated between Enercon Windenergy Spain and its employees (EWS), which reads as follows: “The undertaking has a GPS tracking system in all EWS vocational vehicles made available to workers. The undertaking aims to ensure its fleet is organised more efficiently with better coordination of technical teams and workers’ safety and health. The aim of installing these devices is not to monitor workers’ usual behaviour or activity. However, in accordance with legal principles, the information supplied by the GPS system may be used in the application of the undertaking’s disciplinary regime, resulting in minor, serious or very serious misconduct, having regard to the behaviour in question as verified by the data obtained from the GPS system.” Guaranteeing that businesses will not use AI technologies for disciplinary means, even if the initial use was on the grounds of safety at work, is clearly no easy task.

Another issue concerns the right to disconnect, a measure that enhances health and safety. It is therefore particularly striking that the collective agreement in Madrid’s passenger transport sector classifies a one-off decline in normal performance as serious misconduct – a definition that includes the driver spending insufficient time on the platform.

One area that collective agreements could tackle is the combined use of different invasive technologies. For example, technology makes it possible for businesses to use video surveillance to observe the facial expressions of workers in automated fashion and to detect deviations from pre-set patterns of movement. This would be an unlawful infringement of workers’ rights and freedoms. Such processing may also involve profiling and, potentially, automated decision-making. Accordingly, collective bargaining could provide that video surveillance cannot be used in combination with other technologies, such as facial recognition, because the resulting monitoring would be disproportionate under European and national recommendations.

Protection against abusive monitoring of workers

Workers could experience serious damage to their health or lose their identity as human beings as a consequence of the demands of a faster work rate that has been pre-set by smart machines. As David Graeber explains in his 2018 book Bullshit Jobs, technology has been regularly used to ensure that we work harder, not better – leading to potential health and safety threats. The use of workers’ data to incentivise or penalise them could give rise to occupational insecurity and stress.

Here, an innovative approach to strengthening employment guarantees is required in response to the digital transition, positioning workers – and their emotions – as key elements in this transition to a new model.

Safety at work can be and already is being used as grounds for collecting and processing employees’ data, but the measures must be part of a rationale of prevention.

In other words, such practices are acceptable only if they aim to avoid or reduce the risks present in the working environment. Another safeguard is that the measures must be subject to a proportionality test and a risk assessment prior to their adoption. Here, the risk that such measures will affect other fundamental rights (such as privacy and the protection of personal data) is real – all the more reason to guarantee the participation of workers’ representatives at each step of the adoption process.

This article has been translated from Spanish.