Morandini S., Fraboni F., Puzzo G., Giusino D., Volpi L., Brendel H., et al. (2023). Examining the Nexus between Explainability of AI Systems and User's Trust: A Preliminary Scoping Review. Dublin: CEUR-WS.
Examining the Nexus between Explainability of AI Systems and User's Trust: A Preliminary Scoping Review
Morandini S.; Fraboni F.; Puzzo G.; Giusino D.; Volpi L.; Brendel H.; De Angelis M.; De Cesarei A.; Pietrantoni L.
2023
Abstract
Explainable AI (XAI) systems are designed to provide clear explanations of how the system arrived at a decision or prediction, which increases users' trust. However, the factors that promote trust among XAI users, the different dimensions of trust, and how they affect the human-AI relationship are still under exploration. Through a preliminary literature review, this paper collects the most recent empirical evidence (n=13) investigating the nexus between XAI and users' trust, highlighting the most salient factors shaping this relationship. The studies measured the explainability of AI systems through factors including understandability, informativeness, and system design. They used a range of instruments, from Likert scales and pre-experimental surveys to more nuanced approaches such as image-classification tasks and focus groups. Trust in AI was evaluated through criteria such as trustworthiness and agreement with statements about trust, although some studies adopted methods such as latent trust evaluations, observational measures, and usability tests. Collectively, the studies suggest that factors such as clear explanations, perceived understanding of AI, transparency, reliability, fairness, user-centeredness, emotional responses, and design elements of the system influence trust in AI. Low-fidelity explanations, feelings of fear or discomfort, and low perceived usefulness can decrease trust, whereas systems with medium accuracy or visual explanations did not adversely affect user trust. Explainability methods such as PDP and LIME appear effective at increasing user trust, while SHAP explanations perform less well. To foster trust, AI developers should prioritize designs that address both cognitive and affective aspects of trust building.
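For readers unfamiliar with the explainability methods named above, the following is a minimal illustrative sketch, not drawn from the reviewed studies, of how PDP, LIME, and SHAP explanations can be produced for a tabular classifier. It assumes the scikit-learn, lime, and shap Python packages; the dataset and model choices are arbitrary placeholders.

```python
# Minimal sketch: generating PDP, LIME, and SHAP explanations for a tabular classifier.
# Assumes scikit-learn, lime, and shap are installed; dataset and model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import partial_dependence
from lime.lime_tabular import LimeTabularExplainer
import shap

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# PDP (partial dependence): global effect of one feature on the model's output.
pdp = partial_dependence(model, X, features=[0])
print("PDP averaged predictions for feature 0, grid shape:", pdp["average"].shape)

# LIME: local surrogate model explaining a single prediction.
lime_explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print("LIME top features:", lime_exp.as_list())

# SHAP: additive feature attributions based on Shapley values.
shap_values = shap.TreeExplainer(model).shap_values(X[:5])
print("SHAP attributions computed for 5 instances.")
```

Each method yields a different explanation artifact (a global dependence curve, a local feature ranking, and per-feature attributions, respectively), which is relevant to the review's finding that the form of the explanation shapes user trust differently.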