
Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems

Katharina J. Rohlfing; Philipp Cimiano; Ingrid Scharlau; Tobias Matzner; Heike M. Buhl; Hendrik Buschmeier; Elena Esposito; Angela Grimminger; Barbara Hammer; Reinhold Häb-Umbach; Ilona Horwath; Eyke Hüllermeier; Friederike Kern; Stefan Kopp; Kirsten Thommes; Axel-Cyrille Ngonga Ngomo; Carsten Schulte; Henning Wachsmuth; Petra Wagner; Britta Wrede
2021

Abstract

The recent surge of interest in explainability in artificial intelligence (XAI) is propelled not only by technological advancements in machine learning but also by regulatory initiatives to foster transparency in algorithmic decision making. In this article, we revise the current concept of explainability and identify three limitations: passive explainee, narrow view on the social process, and undifferentiated assessment of understanding. To overcome these limitations, we present explanation as a social practice in which explainer and explainee co-construct understanding on the microlevel. We view this co-construction on the microlevel as embedded in a macrolevel, yielding expectations concerning, e.g., social roles or partner models: typically, the role of the explainer is to provide an explanation and to adapt it to the explainee's current level of understanding; the explainee, in turn, is expected to provide cues that guide the explainer. Building on explanation as a social practice, we present a conceptual framework that aims to guide future research in XAI. The framework relies on the key concepts of monitoring and scaffolding to capture the development of the interaction. We relate our conceptual framework and our new perspective on explaining to transparency and autonomy as objectives considered for XAI.
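As an illustration only (not from the article): a minimal Python sketch of the kind of monitoring-and-scaffolding loop the abstract describes, in which the explainee's cues guide how the explainer adapts its explanation. All names (ScaffoldingExplainer, Explainee, LEVELS) are hypothetical and do not come from the paper or any library.

```python
# Purely illustrative sketch of an explainer-explainee loop:
# the explainer monitors the explainee's cues and adjusts its
# scaffolding level; the explainee's understanding grows when
# the offered explanation matches its current state.

LEVELS = ["high-level analogy", "step-by-step walkthrough", "technical detail"]


class Explainee:
    """Simulated partner whose understanding grows when the explanation fits."""

    def __init__(self) -> None:
        self.understanding = 0.0  # 0.0 = novice, 1.0 = full understanding

    def give_cue(self, level: int) -> str:
        # Monitoring signal: does the offered level match the explainee's state?
        target = int(self.understanding * (len(LEVELS) - 1))
        if level < target:
            return "too simple"
        if level > target:
            return "too complex"
        self.understanding = min(1.0, self.understanding + 0.34)
        return "following"


class ScaffoldingExplainer:
    """Adjusts its scaffolding level turn by turn, guided by the cues."""

    def __init__(self) -> None:
        self.level = 0  # start with the most supportive explanation

    def adapt(self, cue: str) -> None:
        if cue == "too simple":
            self.level = min(len(LEVELS) - 1, self.level + 1)
        elif cue == "too complex":
            self.level = max(0, self.level - 1)


if __name__ == "__main__":
    explainer, explainee = ScaffoldingExplainer(), Explainee()
    for turn in range(5):
        offered = LEVELS[explainer.level]
        cue = explainee.give_cue(explainer.level)  # explainee guides the explainer
        explainer.adapt(cue)                       # explainer scaffolds accordingly
        print(f"turn {turn}: offered {offered!r} -> cue: {cue}")
```

The point of the sketch is the co-construction: neither side fixes the explanation level alone; it emerges from the exchange of explanations and cues over turns.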
Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems / Katharina J. Rohlfing; Philipp Cimiano; Ingrid Scharlau; Tobias Matzner; Heike M. Buhl; Hendrik Buschmeier; Elena Esposito; Angela Grimminger; Barbara Hammer; Reinhold Häb-Umbach; Ilona Horwath; Eyke Hüllermeier; Friederike Kern; Stefan Kopp; Kirsten Thommes; Axel-Cyrille Ngonga Ngomo; Carsten Schulte; Henning Wachsmuth; Petra Wagner; Britta Wrede. - In: IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS. - ISSN 2379-8920. - STAMPA. - 13:3(2021), pp. 9292993.717-9292993.728. [10.1109/TCDS.2020.3044366]
Files in this item:

File: Explanation_as_a_Social_Practice_Toward_a_Conceptual_Framework_for_the_Social_Design_of_AI_Systems.pdf
Access: Open access
Description: Journal article
Type: Publisher's version (PDF)
License: Open Access license. Creative Commons Attribution (CC BY)
Size: 1.07 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11585/831978
Citations
  • PMC: not available
  • Scopus: 27
  • Web of Science: 16