Modelling GDPR-Compliant Explanations for Trustworthy AI

Sovrano, Francesco; Vitali, Fabio; Palmirani, Monica
2020

Abstract

Through the General Data Protection Regulation (GDPR), the European Union has set out its vision for Automated Decision-Making (ADM) and AI, which must be reliable and human-centred. In particular, we are interested in the Right to Explanation, which requires industry to produce explanations of ADM. The High-Level Expert Group on Artificial Intelligence (AI-HLEG), set up to support the implementation of this vision, has produced guidelines discussing the types of explanations that are appropriate for user-centred (interactive) Explanatory Tools. In this paper we propose our version of Explanatory Narratives (EN), based on user-centred concepts drawn from ISO 9241, as a model for user-centred explanations aligned with the GDPR and the AI-HLEG guidelines. Through the use of ENs, we convert the problem of generating explanations for ADM into the identification of an appropriate path over an Explanatory Space, allowing explainees to explore it interactively and produce the explanation best suited to their needs. To this end, we list suitable exploration heuristics, study the properties and structure of explanations, and discuss the strengths and weaknesses of the proposed model.
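
The abstract's central technical move is to recast explanation generation as finding a path over an Explanatory Space that the explainee explores interactively. The paper itself does not ship code; the following is a minimal, purely illustrative sketch in Python, assuming the Explanatory Space can be modelled as a directed graph of explanatory fragments and using breadth-first search as a stand-in for an exploration heuristic. All names (ExplanatoryNode, ExplanatorySpace, explanatory_path) and the loan example are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: models the Explanatory Space as a directed graph
# of explanatory fragments and treats "producing an explanation" as finding a
# path from an initial question to a fragment that satisfies the explainee.
from dataclasses import dataclass, field
from collections import deque


@dataclass
class ExplanatoryNode:
    """A fragment of explanation (e.g. an answer to one question)."""
    node_id: str
    text: str
    follow_ups: list = field(default_factory=list)  # ids of reachable fragments


class ExplanatorySpace:
    """A toy Explanatory Space: nodes connected by 'can be expanded into' edges."""

    def __init__(self, nodes):
        self.nodes = {n.node_id: n for n in nodes}

    def explanatory_path(self, start_id, satisfies):
        """Breadth-first exploration heuristic: return the shortest sequence of
        fragments from start_id to the first node for which satisfies() is true."""
        queue = deque([[start_id]])
        seen = {start_id}
        while queue:
            path = queue.popleft()
            node = self.nodes[path[-1]]
            if satisfies(node):
                return [self.nodes[i].text for i in path]
            for nxt in node.follow_ups:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None  # no fragment in the space satisfies this explainee


# Usage: an explainee asking "why was my loan refused?" explores the space
# until reaching a fragment that mentions the decisive threshold.
space = ExplanatorySpace([
    ExplanatoryNode("q0", "The loan request was refused by the ADM system.", ["f1", "f2"]),
    ExplanatoryNode("f1", "The model weighs income stability most heavily.", ["f3"]),
    ExplanatoryNode("f2", "The decision can be appealed to a human reviewer.", []),
    ExplanatoryNode("f3", "Income below the stability threshold triggered refusal.", []),
])
narrative = space.explanatory_path("q0", lambda n: "threshold" in n.text)
print(" -> ".join(narrative))
```

Other exploration heuristics could replace the plain breadth-first strategy here; the heuristics actually proposed are discussed in the paper's full text.
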
Published in: Electronic Government and the Information Systems Perspective
Pages: 219-233
Citation: Sovrano, F., Vitali, F., Palmirani, M. (2020). Modelling GDPR-Compliant Explanations for Trustworthy AI. Cham: Springer [10.1007/978-3-030-58957-8_16].
Files in this record:

502031_1_En_Print.indd.pdf
Access: Restricted
Type: Publisher's version (PDF)
Licence: Restricted-access licence
Size: 1.44 MB
Format: Adobe PDF
camera ready post print.pdf
Access: Open Access from 07/09/2021
Type: Postprint
Licence: Free open-access licence
Size: 1.59 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/773144
Citations
  • PMC: not available
  • Scopus: 13
  • Web of Science: 11