Privacy in the age of AI

By Denis Sadovnikov, AI & Privacy Lawyer, DPO, CIPP/E, CIPM, FIP

How does privacy strike a balance between technology and human rights in the coming AI-driven civilisation, and what is the role of privacy professionals?

AI is often called “the main technology of the 21st century” as well as “the central element of a new AI-driven economy”. On the other hand, concern about the potential social impact of AI is rising. Particular apprehensions are connected to:

– first, the autonomy of AI (the fear that AI may escape human control is one of humanity’s oldest nightmares); and,

– second, issues of the rule of law, human rights, dignity and democracy (the main fears here concern bias, a growing imbalance of power in favour of the actors who control AI systems, the dehumanisation of decision-making, diminishing human autonomy and new types of surveillance).

Defining AI for privacy purposes

The European Data Protection Supervisor (EDPS), Wojciech Wiewiórowski, emphasised in his speech at the First Eurasian Data Protection Congress that the term “AI” is currently used as a marketing term rather than as a strict legal concept.

The Council of Europe’s Ad Hoc Committee on Artificial Intelligence (CAHAI, since replaced by the Committee on Artificial Intelligence, CAI) pointed out in its Feasibility Study that there is no single definition of AI accepted by the scientific community. The problem of defining AI for regulatory purposes is recognised by the European Commission’s High-Level Expert Group on AI (AI HLEG), AI Watch, the OECD, UNESCO and countless other organisations and publishers.

The main difficulty seems to be that AI is an “umbrella term” embracing a wide range of different technologies, including not only existing but also future ones.

Some of the proposed definitions describe AI through specific technologies. But technology-dependent definitions are prone to become outdated in the near future.

The second approach is to define AI through its ability to “display intelligent behaviour by analysing … environment and taking actions – with some degree of autonomy – to achieve specific goals” (the European Commission, for instance, defines AI in this way). A similar definition is provided by the ICRC: “computer programs that carry out tasks – often associated with human intelligence – that require cognition, planning, reasoning or learning”.

The Canadian Artificial Intelligence and Data Act (AIDA) defines an artificial intelligence system (système d’intelligence artificielle) as “a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions” (s. 2).

In the present paper, the term “AI” is used in the meaning given to it by the European Commission.

What should privacy professionals bear in mind when dealing with AI?

First of all, does data protection legislation apply to AI?

As the Norwegian Data Protection Authority emphasised in its report “Artificial intelligence and privacy”, there are two situations in which data protection law applies in the context of AI:

1) When artificial intelligence is developed with the help of personal data, and

2) When it is used to analyse or reach decisions about individuals.

Since utilising personal data at any stage of the AI lifecycle constitutes personal data processing, data protection legislation is fully applicable. Katharina Koerner considers the applicability of privacy legislation to the AI context in detail.

The case of Clearview AI may illustrate the applicability of data protection legislation to the AI context.

At the end of 2021, the Office of the Australian Information Commissioner (OAIC) found the American facial-recognition start-up Clearview AI in violation of the Australian Privacy Act for collecting images and biometric data without consent or any other valid legal ground for processing biometric data. Shortly after, and based on a joint investigation with Australia’s OAIC, the UK Information Commissioner’s Office (ICO) announced its intent to impose a potential fine of over £17m for the same reasons; eventually, on 23 May 2022, the UK regulator fined Clearview £7.5m and ordered UK data to be deleted. Before long, three Canadian privacy authorities as well as France’s Commission Nationale de l’Informatique et des Libertés (CNIL) ordered Clearview AI to stop processing and to delete the collected data. Sweden’s DPA, the Integritetsskyddsmyndigheten, then investigated the police’s use of facial recognition, without any legal basis, through the Clearview AI application, and declared that usage a breach of the GDPR. On 9 March 2022, the Italian Garante announced a €20m fine on Clearview AI; on 20 July 2022, the Hellenic DPA fined Clearview AI €20m; and on 17 October 2022, the CNIL fined the start-up €20m.

This case demonstrates that where personal data is involved, it must in any case be processed lawfully, fairly and in a transparent manner, and a valid legal ground for processing must be present. It also emphasises that it does not matter that the data is harvested from publicly available sources (such as social media, in Clearview AI’s case).

AI and data protection principles

Data protection principles form the core of the privacy legal framework. However, it is widely recognised that AI by its nature challenges most of these key principles.

Lawfulness, fairness and transparency

The first challenge to this principle is that AI technology reproduces biases present in its training data, which leads to unfair and discriminatory results.
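
As a minimal sketch of how such disparities can be detected (with purely hypothetical decisions and group labels, not drawn from any real system), one common check is the demographic parity difference, which compares the rate of favourable decisions across groups:

```python
# Minimal sketch: measuring disparate outcomes with the demographic
# parity difference (hypothetical data, for illustration only).

def positive_rate(decisions, groups, group):
    """Share of favourable decisions (1s) received by one group."""
    member = [d for d, g in zip(decisions, groups) if g == group]
    return sum(member) / len(member)

# Hypothetical model outputs: 1 = favourable decision (e.g. loan approved).
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rate_a = positive_rate(decisions, groups, "A")  # 0.8
rate_b = positive_rate(decisions, groups, "B")  # 0.2

# A large gap is a signal (not proof) that the model reproduces a bias
# present in its training data and warrants further investigation.
print(f"Demographic parity difference: {abs(rate_a - rate_b):.2f}")  # 0.60
```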

The second is the lack of transparency and explainability (the so-called “black-box effect”). This effect stems from the difficulty of explaining the logic of a system’s decision-making and how the input information correlates with the output decision. In some cases, the explainability of AI algorithms may be limited by the current state of the art, or by intellectual property rights and trade secrets.
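
Post-hoc explanation techniques can mitigate, though not remove, the black-box effect. As a minimal sketch (using scikit-learn on synthetic data, purely for illustration, not as a compliance tool), permutation importance estimates how strongly a model leans on each input feature:

```python
# Minimal sketch: probing a "black-box" model with permutation feature
# importance (synthetic data; illustration only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a decision-making dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the decision logic relies heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance ~ {score:.3f}")
```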

The House of Lords Select Committee on Artificial Intelligence, in its report AI in the UK: Ready, Willing and Able?, stressed that “it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take”. On the other hand, the Committee recognised that “achieving full technical transparency is difficult, and possibly even impossible for certain kinds of AI systems in use today”; moreover, “explainable” does not mean more accurate. It is up to the law to find an appropriate balance and to reconcile this collision. Rebecca Williams, in her article Rethinking Deference for Algorithmic Decision-Making, pointed out that in France courts have the power to require users of automated decisions to release their source code. The “Digital Republic” Law (La République numérique) also requires the publication of any source code used by government administration.

Data minimisation and purpose limitation

The principles of data minimisation and purpose limitation are challenged by the peculiarity of AI that it relies (especially in the case of machine learning (ML)) on huge amounts of personal data. It is a commonplace that “AI is hungry for data”, and it is no easy task to achieve the best result using the minimum of data. Additionally, the vast majority of AI development cases are so-called cases of data reuse: the information is rarely collected for this purpose initially; rather, it was collected for other purposes and then used as training data. Data reuse is considered a necessary condition for ML models. Plenty of legal questions arise here, such as anonymisation, purpose limitation and the compatibility of the further purposes with the initial ones. Data reuse therefore directly affects the data protection legal framework in its fundamentals, including the principles and legal grounds for data processing.
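
In practice, applying data minimisation at the training stage often starts with stripping direct identifiers and keeping only the fields the stated purpose requires. A minimal sketch follows (hypothetical column names, assuming pandas; note that pseudonymised data remains personal data under the GDPR):

```python
# Minimal sketch: data minimisation before training (hypothetical column
# names; a real project would derive the "needed" set from a DPIA).
import hashlib

import pandas as pd

raw = pd.DataFrame({
    "name":   ["Alice", "Bob"],      # direct identifier - not needed
    "email":  ["a@x.io", "b@y.io"],  # direct identifier - not needed
    "age":    [34, 29],              # needed for the stated purpose
    "income": [52_000, 48_000],      # needed for the stated purpose
})

NEEDED_FEATURES = ["age", "income"]  # only what the purpose requires

# Replace direct identifiers with a salted, pseudonymous key so records
# can still be located for data subject rights requests. Pseudonymised
# data is still personal data, so the legal framework continues to apply.
SALT = "replace-with-a-secret-salt"
pseudo_id = raw["email"].map(
    lambda e: hashlib.sha256((SALT + e).encode()).hexdigest()[:16]
)

training_data = raw[NEEDED_FEATURES].assign(record_id=pseudo_id)
print(training_data)
```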

It is not easy to make AI meet principles such as data minimisation and purpose limitation, but some jurisdictions are trying to develop approaches to address this challenge. Two supervisory authorities that are global leaders in this field are worth mentioning:

1) The UK ICO has recently developed several prominent instruments, such as its Guidance on AI and data protection and, together with The Alan Turing Institute, the guidance Explaining decisions made with AI.

2) The French CNIL, building on the Article 29 Working Party’s (WP29) Opinion 03/2013 on purpose limitation, has worked out a purpose compatibility assessment test (test de compatibilité).

Why is it important to govern AI through the data protection framework?

Taking into account society’s expectations of AI, as well as the above-mentioned difficulties in complying with core data protection principles, it is sometimes said that the easiest way is to release AI from redundant requirements and to sacrifice red tape for the sake of progress. This way may indeed seem the easiest to some, but it should be weighed against broad social prospects rather than narrow technological ones.

Broadly speaking, AI technologies may be utilised in ways capable of driving privacy to an end. New types of surveillance, predictive algorithms and social scoring based on AI solutions may leave no room for people’s privacy.

So, the question is, will privacy exist in the age of AI?

Even setting aside the value of privacy itself for human beings, it should be pointed out that privacy is the first line of defence of human rights, democracy and the rule of law; the rights to autonomy and personal integrity are at the core of being human. In order to persecute and oppress people, dictators (or even private actors such as tech corporations) first collect information about them. Surveillance is an immanent feature of any authoritarian regime; it is well known that Hitler’s regime profiled people before sending them to the gas chambers. On 14 July 2022, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) arranged a webinar, ‘An Orwellian Premonition: a discussion on the perils of biometric surveillance’. The name of the webinar and the tenor of the discussion referred to George Orwell’s “1984”, and it was reiterated that current technologies, especially AI-based ones, have far greater potential to control people than those Orwell described many years ago.

The growing imbalance of power, caused by the enormous ability of AI to process huge amounts of data and make predictions, creates huge room for abuse. AI and other technologies are capable of bringing people to heel, but this would happen precisely through the destruction of privacy.

And we already see the rise of biometric surveillance, social scoring and manipulative algorithms.

Therefore, in the current world, privacy is a true red line separating freedom from slavery. The right to privacy is crucial for preserving democracy in the age of digitalisation, because it allows individuals to protect their human integrity, dignity and autonomy.

Responsible AI

The concept of responsible and human-centric AI is intended to address the above-mentioned challenges.

As Katharina Koerner has pointed out, over the last few years numerous good-governance guidelines on trustworthy AI have been published. Some prominent examples of responsible AI frameworks include the OECD Recommendation on AI, the G20 non-binding principles on AI, the UNESCO Recommendation on the Ethics of AI, China’s ethical guidelines for the use of AI, the Council of Europe’s report “Towards Regulation of AI Systems”, and the Ethics Guidelines for Trustworthy AI from the High-Level Expert Group on AI set up by the European Commission.

Looking through these and other Responsible AI initiatives, it is easy to see that most principles of Responsible AI can be fulfilled by implementing data protection principles and respecting data subject rights. Therefore, human-centric AI is first and foremost AI grounded in privacy, and privacy is a real driver of the Responsible AI framework.

The role of privacy professionals

In this situation, privacy professionals, who are at the forefront of privacy and technology and who are responsible for addressing the privacy implications of every stage of every process, including AI development, deployment and use, play an enormously important and difficult role. Our community’s mission is to put humans first in the coming age of AI.
