Analysing the EU AI Act's Treatment of Algorithmic Discrimination
DOI: https://doi.org/10.25365/vlr-2025-9-1-76
Keywords: EU AI Act, algorithmic discrimination, bias, non-discrimination law, supporting role, high-risk AI systems
Abstract
This paper examines the European Union's Artificial Intelligence Act's treatment of algorithmic discrimination, arguing that its structure and its technical focus on bias, rather than discrimination, position it as a supporting framework to existing Union non-discrimination law. Through a systematic review of the Act's provisions relevant to non-discrimination, the analysis unpacks the Act's strategy, structured around four key regulatory movements: the invocation of existing non-discrimination frameworks; an emphasis on technical bias detection and data-quality criteria; the establishment of an exception to the GDPR for processing special categories of data essential for bias correction; and the imposition of transparency and explainability obligations for high-risk AI systems. Through these movements, the paper assesses the Act's effectiveness in addressing the challenges of algorithmic proxy discrimination. The central finding is that the Act's reliance on bias, technical requirements and processes – while necessary for minimising AI-specific sources of discrimination – inherently limits its role to supporting the broader architecture of EU non-discrimination law rather than serving as the primary governance mechanism for discrimination.
License
Copyright (c) 2025 Keketso Kgomosotho

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND). A summary of the license terms is available at:
https://creativecommons.org/licenses/by-nc-nd/4.0/
Authors retain copyright without restrictions.