Sep 20

Real-time remote biometric recognition: where do European and Chinese regulatory approaches coincide?

Biometric recognition is a highly topical issue on regulatory agendas around the globe. Technologies of this type are deemed to have enormous potential to combat crime, particularly terrorist activity. They make security measures more robust and more convenient to use. They are also useful for services such as payments. It is tempting to get rid of bank cards, passes, IDs, etc., and rely only on our biometrics, which are always with us.

On the other hand, wide use of such technologies poses enormous risks for individuals and for society as a whole. First, biometrics reflect unique and permanent human characteristics and, once compromised, cannot be replaced like a national ID or a driving licence. The aggregation of huge biometric databases creates a potential vulnerability and drastically increases the sensitivity, and the devastating effect, of any leak or hack. Second, biometric surveillance that affects not only relatively limited, targeted groups of people (for instance, missing children, public servants, or suspected criminals) erodes the social fabric and deforms people's behaviour. Remote biometric recognition technologies are capable of monitoring people everywhere in real time, undermining the very essence of privacy. The erosion of privacy, in turn, diminishes other fundamental rights and freedoms, such as freedom of speech and freedom of association. People have to change their behaviour: they cannot trust anyone and cannot speak openly even within their inner circle. The 'Big Brother' effect is already a reality in some countries, but it may become much worse with the development of biometric recognition. Biometric surveillance is a powerful tool to manipulate human behaviour and to deprive people of any free choice. Actors controlling such technology (not always and not necessarily states) might gain overwhelming power over society, eradicate democracy and free elections, or turn them into a sham.

Every advanced technology is double-edged and may be used for both benefit and harm. It is up to the law to regulate how a technology is used and to define the ends and modalities of such use.

Law is the social institution capable of endorsing useful applications of biometric recognition and ruling out harmful ones. That is exactly what states are currently trying to do. The main difference between the approaches they adopt lies in which ends they consider socially useful: the preservation and facilitation of free society, democracy, and human rights; or putting biometric recognition technologies under the control of the state (or the ruling group).

In that sense, two leading global approaches may be distinguished:

  • European (or human-centric);
  • Chinese (or state-centric).  

European approach

The European approach to regulating the data and digital sphere is currently predominant and the most influential in the world. The vast majority of new data protection laws across the globe are inspired by the GDPR and other European laws. The European legal tradition thus shapes the mainstream of the future legal basis of the global digital economy.

European policy-makers draw a distinction between 'real-time' remote biometric identification systems (where 'identification occur[s] all instantaneously, near-instantaneously or in any event without a significant delay') and 'post' systems (where 'the comparison and identification occur only after a significant delay').

The main concern about biometric recognition relates to 'remote', 'real-time', or in other words 'live' recognition, particularly in 'publicly accessible places'.

These concepts are defined in the draft Regulation on a European Approach for Artificial Intelligence (the Artificial Intelligence Act, hereinafter the AIA), unveiled by the European Commission on 21 April 2021 and adopted by the European Parliament on 14 June 2023.

Recital 8 of the AIA explains that the 'notion of remote biometric identification system … should be defined functionally… irrespectively of the particular technology, processes or types of biometric data used'.

According to recital 8 and Article 3, which contains the legal definitions, an AI system must satisfy three cumulative criteria to fall within the scope of a 'remote biometric identification system':

1) intended for the identification of natural persons at a distance;

2) through the comparison of a person’s biometric data with the biometric data contained in a reference database; and 

3) without prior knowledge whether the targeted person will be present and can be identified. 

As the AIA is the world's first attempt at comprehensive legislation addressing the challenges posed by AI, the main problem the European legislators faced here is that the development of technology far outruns not only legal norms but even moral ones.

European legislators try to solve this problem in two main ways:

1) Adopting a technology-neutral approach;

2) Adopting a flexible approach based on the general principles of law and on universal legal and moral imperatives. There is a general consensus that such an imperative in the modern world is fundamental human rights. Human rights have repeatedly been pointed out as a 'legal check' on the development and use of AI systems, particularly where specific rules are lacking.

In the context of digital regulation, the fundamental right to privacy is considered a precondition for the protection of other human rights.

During the webinar 'An Orwellian Premonition: a discussion on the perils of biometric surveillance', organised by the EDPS on 14 July 2021, it was emphasised that the right to privacy includes the right to be let alone as well as the right to anonymity in publicly accessible places.

In its Resolution of 6 October 2021 on 'Artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters', the European Parliament stressed that 'individuals not only have the right to be correctly identified, but they also have the right not to be identified at all, unless it is required by law for compelling and legitimate public interests'.

The Consultative Committee of Convention 108 for the protection of individuals with regard to automatic processing of personal data stressed in its Guidelines on facial recognition: 'Integrating facial recognition technologies into existing surveillance systems poses a serious risk to the rights to privacy and protection of personal data, as well as to other fundamental rights'.

In their Joint Opinion 5/2021 on the AIA of 18 June 2021, the EDPB and the EDPS highlighted that remote biometric identification in publicly accessible spaces poses a high risk of intrusion into individuals' private lives, with severe effects on the population's expectation of being anonymous in public spaces, resulting in a direct negative effect on the exercise of freedom of expression, assembly, and association, as well as freedom of movement. This might lead to serious proportionality problems, since it might involve the processing of the data of an indiscriminate and disproportionate number of data subjects.

It is worth noting that the AIA pays particular attention to the problem of AI-driven biometric surveillance. One of the most disputed points of the draft is the scope of the planned prohibition of such systems and the potential exceptions to it. The EDPB and the EDPS have stressed that this ban should cover not only facial recognition but all other types of biometric recognition as well, including recognition based on behavioural characteristics.

On 14 June 2023, MEPs adopted the amended text of the AI Act. The new text demonstrates the evolution of the Parliament's position towards a more comprehensive ban on remote biometric recognition. Article 5(1)(d) prohibits the use of 'real-time' remote biometric identification systems in publicly accessible spaces; all exceptions to the ban were removed from the new text. Article 5(1)(db) bans the placing on the market, putting into service, or use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage. Obviously, this total ban covers systems like Clearview AI and the products of the Russian NTechLab. Article 5(1)(e) restricts market access for 'post' remote biometric identification systems unless their use is subject to prior judicial authorisation in accordance with Union law and is strictly necessary for a targeted search connected to a specific serious criminal offence.

This list of prohibitions of biometric recognition systems is not exhaustive, because Article 5(1a) refers to additional sectoral prohibitions and prohibitions stemming from general principles of law.

Considering its wider (in comparison with the GDPR) extraterritorial effect, inherent in Article 2, the AIA will probably have an enormous influence on the global AI industry.

British specifics

The United Kingdom adopts a much more relaxed approach to state access to personal data and to state surveillance than continental Europe. The President of the United Kingdom Supreme Court, Lord Reed of Allermuir, clearly explained this in T & Anor, R (on the application of) v Secretary of State for the Home Department & Anor [2014] UKSC: “The United Kingdom has never had a secret police or internal intelligence agency comparable to those that have existed in some other European countries, the East German Stasi being a well-known example. There has however been growing concern in recent times about surveillance and the collection and use of personal data by the state. … But such concern on this side of the Channel might be said to have arisen later, and to be less acutely felt, than in many other European countries, where for reasons of history there has been a more vigilant attitude towards state surveillance. That concern and vigilance are reflected in the jurisprudence of the European Court of Human Rights in relation to the collection, storage and use by the state of personal data. The protection offered by the common law in this area has, by comparison, been of a limited nature”.

However, in the UK biometric recognition is subject to independent supervision by the Information Commissioner's Office, the Surveillance Camera Commissioner, and the Biometrics Commissioner, as well as to assessment against the Article 8 ECHR criteria.

In the well-known case R (on the application of Bridges) v Chief Constable of South Wales Police, the England and Wales High Court and later the Court of Appeal recognised that a biometric template resulting from the processing of a digital image through a mathematical algorithm constitutes biometric data. Lord Justice Haddon-Cave and Mr Justice Swift explained in their judgment (§ 133) that the biometric template was ‘compared to other biometric templates in order to provide information about whether one image is like the other. That process of comparison could only take place if each template uniquely identifies the individual to which it relates’.
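The comparison the judges describe can be illustrated in engineering terms: a probe template (a numerical feature vector derived from an image) is matched against reference templates using a similarity measure and a tuned threshold. Below is a minimal illustrative sketch in Python; the vectors, function names, and threshold are hypothetical, and real systems use high-dimensional embeddings produced by neural networks rather than hand-written numbers.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two template vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(probe, reference, threshold=0.9):
    """A probe 'matches' a reference if similarity exceeds a tuned threshold."""
    return cosine_similarity(probe, reference) >= threshold

# Hypothetical low-dimensional templates for illustration only.
enrolled = [0.12, 0.98, 0.40, 0.55]          # reference template in the database
same_person = [0.11, 0.97, 0.41, 0.56]       # probe from another image of that person
different_person = [0.90, 0.10, 0.85, 0.05]  # probe from someone else

print(is_match(enrolled, same_person))       # high similarity: match
print(is_match(enrolled, different_person))  # low similarity: no match
```

The choice of threshold embodies the trade-off the courts scrutinise: a lower threshold produces more false matches (misidentifications), a higher one more missed matches.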

In this case, the Court of Appeal found that the Article 8 ECHR criterion of lawfulness was not met, because the police used AFR (Automated Facial Recognition) in the absence of specific legislative provisions on the grounds, places, and procedures of AFR deployment.

The court also endorsed the practice of 'watchlists' of wanted people, while pointing out that indiscriminate mass surveillance is unlawful and goes beyond what proportionality permits.

 

Chinese approach

China is well known for its mass digital surveillance programmes. Despite this, China, like the EU and the UK, follows the general trend of legally limiting biometric recognition.

According to Article 26 of the PIPL (Personal Information Protection Law of the People's Republic of China), which came into force on 1 November 2021, the installation of image collection or personal identity recognition equipment in public places is allowed only if required for public security; it must comply with relevant State regulations, and clear indicating signs must be installed. Collected personal images and personal identity characteristics may only be used to safeguard public security and may not be used for other purposes unless individuals' separate consent is obtained.

Notably, on 8 August 2023 the Cyberspace Administration of China released for public discussion the draft Provisions on the Security Management of the Application of Facial Recognition Technology.

According to the draft, China is planning to restrict businesses' use of facial recognition technology in favour of non-biometric methods (Article 6). In particular, airports, hotels, stations, banks, stadiums, exhibition halls, and other business establishments shall not use facial recognition to verify personal identity unless required by law.

According to Article 3 of the draft, the use of facial recognition technology must comply with laws and regulations, respect public order and social ethics, and be accompanied by social responsibility and the fulfilment of personal information protection obligations.

If facial recognition is used, the proposed rules encourage the use of national systems such as the National Population-Based Information Library and National Network Authentication Public Service (Article 4).

Article 11 prohibits any organisation or individual from using facial recognition technology to analyse an individual's race, ethnicity, religious beliefs, health status, social class, and other sensitive personal information, unless required for the maintenance of national security and public safety, or to protect life, health, and property in emergency circumstances.

Article 15 requires carrying out a documented personal information impact assessment before implementing facial recognition.

 

Calls for prohibition at the international level

The global concern about AI-driven biometric recognition has been expressed not only in restrictive regulatory trends but also in positions of international organisations.

In her report of 13 September 2021, 'The right to privacy in the digital age', the UN High Commissioner for Human Rights Michelle Bachelet called for a moratorium on the use of biometric recognition systems in publicly accessible places, as well as on remote real-time facial recognition, until states ensure that fundamental human rights are guaranteed. She also welcomed steps to limit or ban such technologies.

She emphasised that remote biometric recognition is linked to deep interference with the right to privacy: ‘A person’s biometric information constitutes one of the key attributes of her or his personality as it reveals unique characteristics distinguishing her or him from other persons. Remote biometric recognition dramatically increases the ability of State authorities to systematically identify and track individuals in public spaces, undermining the ability of people to go about their lives unobserved and resulting in a direct negative effect on the exercise of the rights to freedom of expression, of peaceful assembly and association, as well as freedom of movement’. 

Therefore, widespread AI-driven biometric recognition has launched a new stream of regulatory policy, deriving from concern about a new type of serious threat to fundamental rights. The general direction of this trend can be clearly identified: governments realise the threat posed by AI-driven biometric surveillance and intend to limit it and put it under control.

Two main approaches to limiting biometric surveillance can be distinguished at the global level: the European and the Chinese. The first addresses both sources of threat, state and non-state actors, while the second tries to put biometric surveillance under complete state control.

Interestingly, while adhering to different basic values, European and Chinese legislators sometimes adopt quite similar means to achieve their goals, even though those means operate in different legal, political, social, and economic environments.

As the historical precedent, the European approach is the most coherent and consistent. It relies on long-established legal theory and a solid practical background, and it scrupulously defines basic terms and concepts. Notably, Chinese legislation borrows European terminology and logical constructions almost completely.
