The Regulation of Content Moderation / Galli, Federico; Loreggia, Andrea; Sartor, Giovanni. - Electronic. - 57:(2023), pp. 63-87. [DOI: 10.1007/978-3-031-40516-7_5]
The Regulation of Content Moderation
Galli, Federico; Loreggia, Andrea; Sartor, Giovanni
2023
Abstract
Online platforms have become a key infrastructure for creating and sharing content, thus representing a paramount context for the individual and collective exercise of fundamental rights (e.g., freedom of expression and association) and the realisation of social values (citizens' information, education, democratic dialogue). At the same time, platforms offer new opportunities for unfair or harmful behaviours, such as the unauthorised distribution of copyrighted content, privacy violations, the distribution of unlawful content (e.g., hate speech, child pornography), and fake news. To prevent, or at least mitigate, the spread of such content, online platforms have been encouraged to resort to content moderation. This activity uses automated systems to govern content flows and ensure lawful and productive user interactions. These systems deploy state-of-the-art AI technologies (e.g., deep learning, NLP) to detect prohibited content and restrict its further dissemination. In this Chapter, we will address the use of automated systems in content moderation and the related regulatory aspects. Section 2 will provide a general overview of content moderation on online platforms, focusing mainly on automated filtering. Further, Sect. 3 will describe existing techniques for automatically filtering content. Section 4 will discuss some critical challenges in automated content moderation, namely vulnerability, failures in accuracy, subjectivity and discrimination. Furthermore, Sect. 5 will define some of the steps needed to regulate moderation. Finally, in Sect. 6, we will review existing legislation that addresses content moderation in online environments.
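As an illustration of the kind of automated filtering the abstract refers to, the sketch below trains a toy text classifier and maps its confidence score to a moderation action. Everything in it is hypothetical: the training examples, labels, thresholds, and the `moderate` function are invented for demonstration and do not reproduce any system discussed in the chapter; production moderation pipelines rely on far larger corpora and deep-learning models.

```python
# A minimal sketch of automated content filtering, assuming a toy
# bag-of-words classifier. All data, labels and thresholds here are
# hypothetical and for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training examples: 1 = prohibited, 0 = acceptable.
texts = [
    "I will hurt you",
    "you people do not deserve to live here",
    "great match last night",
    "thanks for sharing the recipe",
]
labels = [1, 1, 0, 0]

# TF-IDF features fed into a Naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

def moderate(post: str,
             block_threshold: float = 0.9,
             review_threshold: float = 0.6) -> str:
    """Map the classifier's estimated probability to a moderation action."""
    p_prohibited = model.predict_proba([post])[0][1]
    if p_prohibited >= block_threshold:
        return "block"                  # automatic removal
    if p_prohibited >= review_threshold:
        return "flag for human review"  # escalate borderline cases
    return "publish"

print(moderate("you people do not deserve to live"))
```

The two thresholds reflect a common design choice: fully automated removal only at high confidence, with borderline scores routed to human reviewers. This is precisely the zone where the accuracy and subjectivity challenges discussed in Sect. 4 arise.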
File: The regulation of content moderaiton.pdf (embargoed until 26/08/2025)
Description: Accepted manuscript of the article
Type: Postprint
Licence: Licence for free open access
Size: 630.34 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.