Sovrano, F., & Vitali, F. (2022). How to Quantify the Degree of Explainability: Experiments and Practical Implications. DOI: 10.1109/FUZZ-IEEE55066.2022.9882574.
How to Quantify the Degree of Explainability: Experiments and Practical Implications
Sovrano, Francesco; Vitali, Fabio
2022
Abstract
Explainable AI was born as a pathway to allow humans to explore and understand the inner workings of complex systems. However, establishing what counts as an explanation and objectively evaluating explainability are not trivial tasks. With this paper, we present a new model-agnostic metric to measure the Degree of Explainability of (correct) information in an objective way. It exploits a specific theoretical model from Ordinary Language Philosophy, Achinstein's Theory of Explanations, implemented with an algorithm relying on deep language models for knowledge graph extraction and information retrieval. To understand whether this metric actually behaves as explainability is expected to, we devised an experiment on two realistic Explainable AI-based systems for healthcare and finance, using well-known AI technology including Artificial Neural Networks and TreeSHAP. The results we obtained suggest that our proposed metric for measuring the Degree of Explainability is robust across several scenarios.
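The abstract states that the metric is computed by an algorithm combining deep language models for knowledge graph extraction with information retrieval. Below is a minimal conceptual sketch of such a question-answering-based scoring pipeline; the archetypal question set, the `retrieve_pertinence` stub, and the plain-average aggregation are all illustrative assumptions, not the authors' actual implementation.

```python
# Conceptual sketch only: a toy Degree-of-Explainability-style score.
# All names and heuristics here are illustrative assumptions.

ARCHETYPAL_QUESTIONS = ["why", "how", "what", "when", "who"]  # assumed question set

def retrieve_pertinence(information: str, question: str) -> float:
    """Hypothetical stand-in for a deep-language-model retriever that
    scores how well `information` answers an archetypal `question` (0..1).
    Here a trivial keyword check is used purely for illustration."""
    return float(question in information.lower())

def degree_of_explainability(information: str) -> float:
    """Aggregate per-question pertinence scores into a single value;
    a plain average is an assumption, not the paper's formula."""
    scores = [retrieve_pertinence(information, q) for q in ARCHETYPAL_QUESTIONS]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    text = ("The loan was denied because the applicant's income is low; "
            "this is how the model weighs each feature.")
    print(f"Toy explainability score: {degree_of_explainability(text):.2f}")
```

The intuition, following Achinstein's view of explaining as answering questions, is that information is more explainable the better it answers a broad set of archetypal questions; a real implementation would replace the keyword stub with a neural retriever over an extracted knowledge graph.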
File | Access | Type | License | Size | Format
---|---|---|---|---|---
manuscript.pdf | Open access | Postprint | Free open-access license | 476.38 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.