Platform Practice and Non-Discrimination
Conceptions of Equality, Systemic Risks, and Data Bias in the DSA
DOI: https://doi.org/10.25365/vlr-2025-9-3-124
Keywords: platform governance, platform regulation, content moderation, non-discrimination
Abstract
Platforms have developed into important communicative infrastructures. The rules and practices of platforms define what happens on them, but these rules and practices are not free from discrimination. This is especially true as technical tools used for content moderation, such as hate speech detection tools or recommender systems, can carry mechanisms of exclusion over into digital spheres and may even reinforce them. The EU legislator seems to recognize the issue and repeatedly emphasizes the relevance of “non-discrimination” in the Digital Services Act (DSA). But what is “non-discrimination” when it comes to platforms? This paper addresses the conceptions of “non-discrimination” and equality within the DSA and asks how the related provisions can be put into practice. The first section examines the connection between platforms and non-discrimination, discussing how public values are being incorporated into private orders (II.). The second section shows how the term “non-discrimination” and related terms can be understood under the DSA (III.). This is followed by an analysis of specific content moderation-related provisions at the individual (IV.) and the systemic level (V.).
License
Copyright (c) 2025 Felicitas Rachinger

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND). A summary of the license terms can be found on the following page:
https://creativecommons.org/licenses/by-nc-nd/4.0/
Authors retain copyright without restrictions.