Examining the Nexus between Explainability of AI Systems and User's Trust: A Preliminary Scoping Review

Morandini S.; Fraboni F.; Puzzo G.; Giusino D.; Volpi L.; Brendel H.; Balatti E.; De Angelis M.; De Cesarei A.; Pietrantoni L.
2023

Abstract

Explainable AI (XAI) systems are designed to provide clear explanations of how a system arrived at a decision or prediction, with the aim of increasing users' trust. However, the factors that promote trust among XAI users, the different dimensions of trust, and how they affect the human-AI relationship are still under exploration. Through a preliminary literature review, this paper collects the most recent empirical evidence (n=13) investigating the nexus between XAI and users' trust, highlighting the most salient factors shaping this relationship. The reviewed studies measured XAI along dimensions such as understandability, informativeness, and system design factors. Different instruments were used, including Likert scales and pre-experimental surveys, as well as more nuanced approaches such as image-classification tasks and focus groups. Trust in AI was evaluated through criteria such as trustworthiness and scales of agreement with statements about trust, although some studies adopted methods such as latent trust evaluations, observational measures, and usability tests. Collectively, the studies suggest that factors such as clear explanations, perceived understanding of the AI, transparency, reliability, fairness, user-centeredness, emotional responses, and design elements of the system influence trust in AI. Low-fidelity explanations, feelings of fear or discomfort, and low perceived usefulness can decrease trust, whereas systems displaying medium accuracy or using visual explanations did not adversely affect user trust. Explainability methods such as partial dependence plots (PDP) and LIME (Local Interpretable Model-agnostic Explanations) appear effective at increasing user trust, while SHAP (SHapley Additive exPlanations) explanations perform less well. To foster trust, AI developers should prioritize designs that consider both the cognitive and the affective aspects of trust-building.
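The abstract names three post-hoc explainability methods (PDP, LIME, SHAP) without showing how they are produced. As an illustrative aside, not drawn from the reviewed studies, the Python sketch below shows how each method is typically invoked on a tabular classifier; the dataset, model, and parameter choices are assumptions made purely for illustration.

    # Illustrative sketch only: how the three explanation methods named in the
    # abstract (PDP, LIME, SHAP) are typically produced for a tabular classifier.
    # The dataset, model, and parameters are assumptions for illustration, not
    # taken from the reviewed studies. Requires scikit-learn, lime, and shap.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import PartialDependenceDisplay
    from lime.lime_tabular import LimeTabularExplainer
    import shap

    data = load_breast_cancer()
    X, y = data.data, data.target
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # PDP: global view of how one feature shifts the average prediction.
    PartialDependenceDisplay.from_estimator(model, X, features=[0])

    # LIME: fits a local surrogate model around a single instance.
    lime_explainer = LimeTabularExplainer(
        X, feature_names=list(data.feature_names), mode="classification")
    lime_explanation = lime_explainer.explain_instance(
        X[0], model.predict_proba, num_features=5)

    # SHAP: Shapley-value attributions per feature for each prediction.
    shap_values = shap.TreeExplainer(model).shap_values(X[:50])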
Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023)
pp. 30-35
Morandini S., Fraboni F., Puzzo G., Giusino D., Volpi L., Brendel H., et al. (2023). Examining the Nexus between Explainability of AI Systems and User's Trust: A Preliminary Scoping Review. Dublin: CEUR-WS.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/950855

Citations
  • Scopus: 1