The Quarrel of Local Post-hoc Explainers for Moral Values Classification in Natural Language Processing / Andrea Agiollo, Luciano C. Siebert, Pradeep K. Murukannaiah, Andrea Omicini. - PRINT. - 14127:(2023), pp. 97-115. (Paper presented at the conference Explainable and Transparent AI and Multi-Agent Systems, held in London, UK, 29/5/2023-2/6/2023) [10.1007/978-3-031-40878-6_6].

The Quarrel of Local Post-hoc Explainers for Moral Values Classification in Natural Language Processing

Andrea Agiollo;Andrea Omicini
2023

Abstract

Although popular and effective, large language models (LLMs) are characterised by a performance vs. transparency trade-off that hinders their applicability to sensitive scenarios. This is the main reason behind the many local post-hoc explanation approaches recently proposed by the XAI community. However, to the best of our knowledge, a thorough comparison among available explainability techniques is currently missing, mainly due to the lack of a general metric to measure their benefits. We compare state-of-the-art local post-hoc explanation mechanisms for models trained on moral value classification tasks, based on a measure of correlation. By relying on a novel framework for comparing global impact scores, our experiments show that most local post-hoc explainers are only loosely correlated, and highlight large discrepancies in their results: their "quarrel" about explanations. Finally, we compare the impact score distributions obtained from each local post-hoc explainer with human-made dictionaries, and point out that there is no correlation between explanation outputs and the concepts humans consider salient.
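To illustrate the kind of comparison the abstract describes, the sketch below aggregates per-token local impact scores into global scores and measures the agreement between two explainers with a rank correlation. The function names, the aggregation by mean absolute score, and the use of Spearman's rho are illustrative assumptions, not the exact procedure of the paper.

```python
# Hedged sketch: turn local (per-sentence) token impact scores from two explainers
# into global per-token scores, then quantify their agreement with Spearman's rho.
# Aggregation by mean absolute score is an assumed, illustrative choice.
from collections import defaultdict
from scipy.stats import spearmanr


def global_scores(local_explanations):
    """local_explanations: list of dicts {token: impact_score}, one per classified sentence."""
    totals, counts = defaultdict(float), defaultdict(int)
    for explanation in local_explanations:
        for token, score in explanation.items():
            totals[token] += abs(score)
            counts[token] += 1
    return {token: totals[token] / counts[token] for token in totals}


def explainer_agreement(global_a, global_b):
    """Spearman correlation over the tokens scored by both explainers."""
    shared = sorted(set(global_a) & set(global_b))
    rho, _ = spearmanr([global_a[t] for t in shared],
                       [global_b[t] for t in shared])
    return rho


# Toy usage with made-up scores for two hypothetical explainers
shap_like = [{"care": 0.9, "harm": 0.7, "the": 0.1}]
lime_like = [{"care": 0.4, "harm": 0.8, "the": 0.3}]
print(explainer_agreement(global_scores(shap_like), global_scores(lime_like)))
```

A low or negative correlation under such a comparison would indicate the "quarrel" the paper reports: explainers that disagree on which tokens matter globally.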
Year: 2023
Published in: Explainable and Transparent AI and Multi-Agent Systems
Pages: 97-115
Andrea Agiollo, Luciano C. Siebert, Pradeep K. Murukannaiah, Andrea Omicini
Files in this product:
File: main.pdf (embargo until 05/09/2024)
Type: Postprint
Licence: Free open-access licence
Size: 843.4 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/940734
Citations
  • PMC: not available
  • Scopus: 3
  • Web of Science (ISI): not available