Algorithmic Knowability: A Unified Approach to Explanations in the AI Act

Sapienza, Salvatore; Palmirani, Monica
2025

Abstract

The European Union's AI Act introduces a complex framework for algorithmic transparency and explainability. This paper examines the AI Act's explainability requirements through the lens of legal informatics. First, it provides a framework for explainability provisions in EU digital regulation by discussing the GDPR, the Digital Services Act (DSA), the Digital Markets Act (DMA), and the withdrawn AI Liability Directive. It then identifies four interpretative dimensions of explainability under the AI Act: deployer-oriented (ensuring system transparency for appropriate use), compliance-oriented (documentation for regulatory adherence), individual-empowering (rights to contest AI-supported decisions), and oversight-oriented (tools enabling meaningful human control). These dimensions collectively form a framework of Algorithmic Knowability, which rests on the necessity of contextual explanations tailored to diverse stakeholders: providers, deployers, regulators, and affected individuals. The study then evaluates the Knowability approach in light of ongoing standardization initiatives and XAI research. The paper concludes that the AI Act's explainability provisions necessitate a shift from one-size-fits-all explanations to context-dependent approaches.
Explainable Artificial Intelligence. xAI 2025, pp. 185–209.
Sapienza, S., Palmirani, M. (2025). Algorithmic Knowability: A Unified Approach to Explanations in the AI Act. Cham: Springer [10.1007/978-3-032-08317-3_9].
Files in this record:

Algorithmic Knowability - A Unified Approach to Explanations in the AI Act.pdf
Open access
Type: Publisher's PDF / Version of Record
Licence: Open Access Licence. Creative Commons Attribution (CC BY)
Size: 1.05 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1025265