An objective metric for Explainable AI: How and why to estimate the degree of explainability / Sovrano F.; Vitali F. - In: Knowledge-Based Systems. - ISSN 0950-7051. - Electronic. - 278 (2023), pp. 110866.1-110866.23. [DOI: 10.1016/j.knosys.2023.110866]

An objective metric for Explainable AI: How and why to estimate the degree of explainability

Sovrano F.; Vitali F.
2023

Abstract

This paper presents a new method for objectively measuring the explainability of textual information, such as the outputs of Explainable AI (XAI) systems. We introduce a metric called the Degree of Explainability (DoX), drawing inspiration from Ordinary Language Philosophy and Achinstein's theory of explanations. The metric assumes that the degree of explainability is directly proportional to the number of relevant questions that a piece of information can correctly answer. We operationalize this concept by formalizing the DoX metric through a mathematical formula, which we have integrated into a software tool named DoXpy. DoXpy relies on pre-trained deep language models for knowledge extraction and answer retrieval to estimate the DoX score, turning our theoretical insights into a practical tool for real-world applications. To confirm the effectiveness and consistency of our approach, we conducted comprehensive experiments and user studies with over 190 participants, evaluating the quality of explanations produced by XAI-based software systems in healthcare and finance. Our results show that increases in objective explanation usability correlate with increases in the DoX score, suggesting that the DoX metric is congruent with other mainstream explainability measures while providing a more objective and cost-effective alternative to non-deterministic user studies. We therefore discuss the potential of DoX as a tool for evaluating the legal compliance of XAI systems. By bridging the gap between theory and practice in Explainable AI, our work fosters transparency, understandability, and legal compliance. DoXpy and related materials are available online to ensure reproducibility. © 2023 Elsevier B.V. All rights reserved.
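
The abstract's core idea (explainability proportional to the number of relevant questions a text can correctly answer) can be sketched in a few lines. The following is a minimal illustration of that principle only, not the paper's actual DoX formula or the DoXpy implementation: the question templates, the QA model, and the pertinence threshold are all assumptions made for demonstration.

    # Minimal sketch of the DoX intuition: count how many archetypal questions
    # about a set of aspects a text can answer with sufficient confidence.
    # NOTE: model choice, question templates, and threshold are illustrative
    # assumptions; they do not reproduce the paper's formula or DoXpy.
    from transformers import pipeline

    # Archetypal question templates in the spirit of Achinstein-style explanations.
    ARCHETYPES = [
        "What is {x}?",
        "Why {x}?",
        "How does {x} work?",
        "What is {x} for?",
        "When does {x} apply?",
    ]

    def degree_of_explainability(text, aspects, threshold=0.2):
        """Return the fraction of archetypal questions about each aspect
        that the text answers with a QA confidence above the threshold."""
        qa = pipeline("question-answering",
                      model="distilbert-base-cased-distilled-squad")
        answered, asked = 0, 0
        for aspect in aspects:
            for template in ARCHETYPES:
                question = template.format(x=aspect)
                result = qa(question=question, context=text)
                asked += 1
                if result["score"] >= threshold:  # answer deemed pertinent
                    answered += 1
        return answered / asked if asked else 0.0

    if __name__ == "__main__":
        explanation = ("The model denies the loan because the applicant's "
                       "debt-to-income ratio exceeds 40%, which is the bank's "
                       "risk threshold for unsecured credit.")
        print(degree_of_explainability(explanation, ["the loan denial"]))

Under this reading, a text that can answer more of the relevant why/how/what questions about its subject scores higher, which matches the abstract's claim that explainability grows with the number of relevant questions the information can correctly answer.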


Use this identifier to cite or link to this document: https://hdl.handle.net/11585/961166

Citations
  • Scopus: 3
  • Web of Science: 0