Data protection and AI

AI is considered a key technology for the future, enabling data-based and automated decision-making. The way data is collected, used and structured to arrive at these decisions is anything but neutral and has many legal implications. Let’s take a look at an example from HR.

Text: Maya Dougoud, published on 04.05.2021

The new applications that artificial intelligence (AI) systems enable are currently counted among the most promising developments in digitalisation.

This huge transformative potential is based on collecting and managing a vast volume of data. Since data plays such a key role, AI raises fundamental legal and ethical questions and brings with it a whole host of challenges, hence the link between AI-based technologies and data protection.

It is inevitable that organisations engaged in digitalisation will use AI; it is also a necessary tool for making fast, relevant, objective and inclusive decisions. Yet integrity and quality assurance depend on good governance and trust. Considering the implementation of AI, its broad range of applications, the fields concerned and its very architecture, the High-Level Expert Group on Artificial Intelligence has declared three components absolutely vital if AI is to be trusted: AI must intrinsically abide by the law, meet ethical requirements and rest on a robust structure.

Considering how AI works, data protection legislation is becoming increasingly important among these requirements, and technical and organisational measures are taking on a whole new dimension.

AI in the world of work

AI has already made its presence strongly felt in the world of work – in the private sector and the public sector alike. When AI is used for labour and/or staff administration, the stakes are just as high, if more latent. By using AI technology for talent acquisition and selection, the private and public organisations doing so can be held accountable for violations of fundamental rights, given the biases present both in the AI code itself and in the decisions it produces. These fundamental rights include equality, non-discrimination and privacy protection.

Using AI for recruitment purposes in Switzerland

When recruiting to fill vacancies in Switzerland, many Swiss employers engage the services of foreign subcontractors, who manage online applicant pre-selection forms. Billed as completely compliant and objective, these forms reflect inclusive staff policies and do not ask for photos, personal details or family information – yet they apply inquisitive algorithms that draw highly sensitive information from various spheres such as social networks. These AI tools make decisions by collecting, transferring, compiling, interpreting and using data on applicants' employability, including work experience, evaluations, medical conditions, family circumstances, criminal record, ethnic origin, religious beliefs, sexual orientation, gender and political views. This includes strictly personal, confidential and highly sensitive data that applicants have not clearly, transparently and explicitly consented to providing.

Diversity and inclusion

Given the biases in these tools, the algorithms will catch talented individuals – such as young women of colour – in their nets, presuming them to be incompetent and leaving them unable to get a job that matches their qualifications and skills. Although applicants submit their applications without any indication of their age, gender or ethnic origin, the tool will profile them based on information obtained automatically by trawling through their digital environment. All of this is done without authorisation and without voluntary, informed and express consent.
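How can a 'blind' form still discriminate? The following toy Python sketch is entirely hypothetical – the data, feature names and scoring rule are invented for illustration and do not describe any real vendor's system. It shows how a proxy feature harvested from an applicant's digital environment can reintroduce a protected attribute that the form itself never asks for.

```python
# Hypothetical sketch: how "blind" screening can still discriminate
# through proxy features. All names and numbers are invented.
import random

random.seed(42)

def make_applicant(group):
    """Generate a synthetic applicant. 'group' is a protected attribute
    (e.g. ethnic origin) that the screening model never sees directly."""
    skill = random.gauss(0.7, 0.1)  # true competence, identical across groups
    # A proxy signal trawled from the applicant's digital environment
    # (e.g. postcode, social-network metadata). By construction it
    # correlates with group membership, not with competence.
    proxy = random.gauss(0.8 if group == "majority" else 0.4, 0.1)
    return {"group": group, "skill": skill, "proxy": proxy}

def screening_score(applicant):
    """Toy scoring rule learned from biased historical hires: it rewards
    the proxy feature, so it encodes group membership indirectly."""
    return 0.5 * applicant["skill"] + 0.5 * applicant["proxy"]

applicants = ([make_applicant("majority") for _ in range(500)]
              + [make_applicant("minority") for _ in range(500)])

shortlisted = [a for a in applicants if screening_score(a) > 0.65]

for group in ("majority", "minority"):
    total = sum(1 for a in applicants if a["group"] == group)
    hired = sum(1 for a in shortlisted if a["group"] == group)
    print(f"{group}: {hired}/{total} shortlisted ({hired / total:.0%})")

# Both groups are equally skilled by construction, yet the shortlisting
# rates diverge sharply: the proxy feature silently reintroduces the
# protected attribute that the application form omitted.
```

Running this sketch, the majority group is shortlisted at a far higher rate than the equally skilled minority group – which is precisely the mechanism behind the rights violations described above.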

Not only does this process violate the most fundamental rights; it also infringes data protection law and its principles of purpose limitation, transparency and data minimisation, quite apart from the lack of explicit consent from applicants. The situation becomes clearer once we take into account the presence of biases and the fact that individuals are sorted into groups by discriminating indicators (race, ethnic origin, religious beliefs or sexual orientation). In this case, then, not only are applicants' fundamental and constitutional rights affected; criminal-law provisions (such as Art. 261bis of the Swiss Criminal Code) may apply too.

Using automated decision-making in HR

As described, it is important for HR to review and be aware of the challenges these tools pose. It is crucial to have complete and transparent information, to read the tools' general terms and conditions and the descriptions of how they work, and to understand the issues at stake, so that interests can be balanced and responsibility assumed for the choices the tools make.


When using such tools, it is vital to understand what the biases are, what role human intervention plays, which technical and organisational measures are built into the AI itself and which are needed when deploying it in a given environment.
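What might such a measure look like in practice? The sketch below is a purely hypothetical Python example of one organisational measure – a human-in-the-loop gate with an audit trail. The file name, field names and workflow are assumptions for illustration, not a description of any existing system or legal requirement.

```python
# Hypothetical sketch of an organisational measure: no automated rejection
# takes effect until a named human reviewer confirms it, and every decision
# is logged so it can be audited and contested later.
import datetime
import json

AUDIT_LOG = "screening_audit.jsonl"  # assumed log location, illustrative only

def record_decision(applicant_id, ai_recommendation, reviewer,
                    final_decision, reason):
    """Append an auditable record tying each outcome to a human reviewer."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "ai_recommendation": ai_recommendation,
        "reviewer": reviewer,
        "final_decision": final_decision,
        "reason": reason,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

def decide(applicant_id, ai_recommendation, reviewer, override=None, reason=""):
    """The AI output is only a recommendation; the reviewer's word is final."""
    final = override if override is not None else ai_recommendation
    return record_decision(applicant_id, ai_recommendation, reviewer,
                           final, reason)

# Example: the reviewer overturns an automated rejection after checking
# the file, and the override is documented for later scrutiny.
decide("A-1042", "reject", reviewer="hr.lead@example.org",
       override="shortlist", reason="Rejection driven by irrelevant proxy data.")
```

The design choice here is that the AI output never becomes a decision on its own: a human remains accountable for every outcome, and the log makes that accountability demonstrable.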

So HR is responsible for training and educating personnel, raising staff awareness and making everyone accountable for the risks that these tools pose. Indeed, human intervention does form part of how AI works, whichever way you look at it, and we cannot deny our responsibility.

Ethical issues

According to an article in the United Nations Chronicle, we are faced with a crucial question: what kind of society do we want for tomorrow? The same article states: 'AI is humanity's new frontier. Once this boundary is crossed, AI will lead to a new form of human civilisation. The guiding principle of AI is not to become autonomous or replace human intelligence.'

Considering the implications and issues, many organisations have included important values – such as ensuring that AI technologies are subject to human management and monitoring and avoiding the creation or reinforcement of unfair biases – in their codes of ethics.

Specifically, HR must ensure that the human factor is enshrined in the HR policy's technical and organisational measures by protecting, respecting and cultivating the principles of fundamental rights, as required by data protection legislation. HR is expected to drive diversity and inclusion initiatives that tackle biases and discrimination, transforming both the organisation and the corporate culture. For employers, overlapping private, public and criminal liabilities are at stake.

HR must ensure that staff management is developed in line with a human-centric, values-based and human rights-based approach. This recommendation builds on the work done in this field by the Council of Europe and other international organisations. It is rooted in the existing universal, binding and enforceable framework provided by the international human rights system.

To sum up, human-centric AI is indeed an option. It requires people to be protected through ethics, transparency, information and an effective remedy in respect of AI usage, recognised by everyone involved in safeguarding human rights at European and international level.

More on the topic of discrimination and AI:

Fay Parris, Deborah Diallo, Monika Pfaffinger and I gave a talk on this topic at the IAPP European Data Protection Intensive Online 2021. The presentation is available on the IAPP website.

 

About the author

Maya Dougoud

Maya Dougoud works as a legal counsel in academia (University of Applied Sciences and Arts Western Switzerland in Fribourg and SWITCH). She also conducts research as part of her work with the HumanTech Institute of the School of Engineering and Architecture Fribourg and as an associate professor at the School of Management Fribourg. Having trained at the University of Fribourg’s Faculty of Law, she worked in a bilingual law firm specialising in business law, then in the Office of the Public Prosecutor. She is also involved in social projects through her association, 38,5, and as co-president of StrukturELLE.
