How to Quantify the Degree of Explainability: Experiments and Practical Implications

Sovrano, Francesco; Vitali, Fabio
2022

Abstract

Explainable AI was born as a pathway to allow humans to explore and understand the inner workings of complex systems. However, establishing what counts as an explanation and objectively evaluating explainability are not trivial tasks. With this paper, we present a new model-agnostic metric to measure the Degree of Explainability of (correct) information in an objective way. It exploits a specific theoretical model from Ordinary Language Philosophy, Achinstein's Theory of Explanations, implemented with an algorithm that relies on deep language models for knowledge graph extraction and information retrieval. To understand whether this metric actually behaves as explainability is expected to, we devised an experiment on two realistic Explainable AI-based systems for healthcare and finance, using well-known AI technologies including Artificial Neural Networks and TreeSHAP. The results suggest that our proposed metric for measuring the Degree of Explainability is robust across several scenarios.
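The abstract describes a pipeline in which deep language models extract a knowledge graph from (correct) information and an information-retrieval step checks how well that information answers explanatory questions. Since the paper itself is not reproduced in this record, the following is only a minimal, hedged sketch of that general idea: it scores a text by how well its sentences match a handful of archetypal explanatory questions via sentence embeddings. The question list, the naive sentence splitting, the all-MiniLM-L6-v2 model, and the max/mean aggregation are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a question-answering style "degree of explainability"
# score, loosely inspired by the Achinstein-based approach the abstract
# describes. NOT the authors' method: the archetypal-question list, the
# sentence segmentation, and the aggregation below are illustrative
# assumptions only.
from sentence_transformers import SentenceTransformer, util

# Archetypal explanatory questions the text should address (assumed list).
ARCHETYPAL_QUESTIONS = [
    "Why does this happen?",
    "How does it work?",
    "What is it?",
    "What is it for?",
]

model = SentenceTransformer("all-MiniLM-L6-v2")


def degree_of_explainability(explanation_text: str) -> float:
    """Score how well the text covers the archetypal questions, in [0, 1]."""
    # Naive sentence segmentation; the paper instead extracts a knowledge
    # graph from the text with deep language models.
    sentences = [s.strip() for s in explanation_text.split(".") if s.strip()]
    if not sentences:
        return 0.0
    q_emb = model.encode(ARCHETYPAL_QUESTIONS, convert_to_tensor=True)
    s_emb = model.encode(sentences, convert_to_tensor=True)
    sim = util.cos_sim(q_emb, s_emb)  # shape: (num_questions, num_sentences)
    # For each question, keep the best-answering sentence, then average
    # over questions (one of many plausible aggregation choices).
    return float(sim.max(dim=1).values.mean())


if __name__ == "__main__":
    text = ("The loan was denied because the applicant's debt-to-income "
            "ratio exceeds the model's threshold. The model weighs income, "
            "debt and credit history to estimate default risk.")
    print(f"Explainability-like score: {degree_of_explainability(text):.3f}")
```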
Year: 2022
Conference: 2022 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE)
Pages: 1-9
How to Quantify the Degree of Explainability: Experiments and Practical Implications / Sovrano, Francesco; Vitali, Fabio. - ELECTRONIC. - (2022), pp. 1-9. (Paper presented at the 2022 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), held in Padua, Italy, 18-23 July 2022) [10.1109/FUZZ-IEEE55066.2022.9882574].
Sovrano, Francesco; Vitali, Fabio
Files in this record:

File: manuscript.pdf
Access: open access
Type: Postprint
Licence: free-of-charge open access licence
Size: 476.38 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/894604
Citations
  • PMC: no data available
  • Scopus: 5
  • Web of Science: 1