Analysing the EU AI Act’s Treatment of Algorithmic Discrimination

Authors

  • Keketso Kgomosotho, University of Vienna

DOI:

https://doi.org/10.25365/vlr-2025-9-1-76

Keywords:

EU AI Act, algorithmic discrimination, bias, non-discrimination law, supporting role, high-risk AI systems

Abstract

This paper examines the European Union’s Artificial Intelligence Act’s treatment of algorithmic discrimination, arguing that its structure and technical focus on bias, rather than discrimination, position it as a supporting framework to existing Union non-discrimination law. Through a systematic review of the Act, concentrating on provisions relevant to non-discrimination, the analysis unpacks the Act’s strategy, structured around four key regulatory movements: the invocation of existing non-discrimination frameworks; an emphasis on technical bias detection and data-quality criteria; the establishment of an exception to the GDPR for processing special categories of data essential for bias correction; and the imposition of transparency and explainability obligations for high-risk AI systems. Through these movements, the paper assesses the Act’s effectiveness in addressing the challenges of algorithmic proxy discrimination. The central finding is that the Act’s reliance on bias-focused technical requirements and processes – while necessary for minimising AI-specific sources of discrimination – inherently limits its role to supporting the broader architecture of EU non-discrimination law, rather than serving as the primary governance mechanism for discrimination.

Author Biography

Keketso Kgomosotho, University of Vienna

Keketso Kgomosotho is a doctoral researcher and Ars Iuris fellow from South Africa. His current research focuses on the intersection between machine learning operational logic, international legal governance, and consciousness (qualia) in the context of AI decision-making. He is also an Attorney of the High Court of South Africa.

Published

2025-11-12