EXPECTATION: Personalized Explainable Artificial Intelligence for Decentralized Agents with Heterogeneous Knowledge

Davide Calvaresi, Giovanni Ciatto, Amro Najjar, Reyhan Aydoğan, Leon Van der Torre, Andrea Omicini, Michael I. Schumacher
2021

Abstract

Explainable AI (XAI) has emerged in recent years as a set of techniques and methodologies for interpreting and explaining machine learning (ML) predictors. To date, many initiatives have been proposed. Nevertheless, current research efforts mainly focus on methods tailored to specific ML tasks and algorithms, such as image classification and sentiment analysis. Moreover, explanation techniques are still embryonic, and they mainly target ML experts rather than heterogeneous end-users. Furthermore, existing solutions assume data to be centralised, homogeneous, and fully/continuously accessible, circumstances seldom found altogether in practice. Arguably, a system-wide perspective is currently missing. The project named “Personalized Explainable Artificial Intelligence for Decentralized Agents with Heterogeneous Knowledge” (Expectation) aims to overcome such limitations. This manuscript presents the overall objectives and approach of the Expectation project, focusing on the theoretical and practical advancement of the state of the art of XAI towards the construction of personalised explanations despite the decentralisation and heterogeneity of knowledge, agents, and explainees (whether human or virtual). To tackle the challenges posed by personalisation, decentralisation, and heterogeneity, the project fruitfully combines abstractions, methods, and approaches from the multi-agent systems, knowledge extraction/injection, negotiation, argumentation, and symbolic reasoning communities.
In: Explainable and Transparent AI and Multi-Agent Systems. Third International Workshop, EXTRAAMAS 2021, Virtual Event, May 3–7, 2021, Revised Selected Papers, pp. 331–343
Calvaresi, D., Ciatto, G., Najjar, A., Aydoğan, R., Van der Torre, L., Omicini, A., Schumacher, M.I. (2021). EXPECTATION: Personalized Explainable Artificial Intelligence for Decentralized Agents with Heterogeneous Knowledge. In: Explainable and Transparent AI and Multi-Agent Systems. Third International Workshop, EXTRAAMAS 2021, Revised Selected Papers (pp. 331–343). Cham: Springer Nature. DOI: 10.1007/978-3-030-82017-6_20
Files in this record:

extraamas-2021-expectation.pdf
Type: Postprint
Licence: Free open-access licence
Size: 1.73 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/838535
Citations
  • PubMed Central: n/a
  • Scopus: 7
  • Web of Science: 5