Perlocution vs Illocution: How Different Interpretations of the Act of Explaining Impact on the Evaluation of Explanations and XAI

Sovrano F.; Vitali F.
2023

Abstract

This article discusses the concepts of illocutionary, perlocutionary, and locutionary acts and their role in understanding explanations. Illocutionary acts concern the speaker's intended meaning, perlocutionary acts the effect produced on the listener, and locutionary acts the utterance itself. We propose a new way to categorise established definitions of explanation according to these speech-act distinctions, which sharpens our grasp of how explanations work. We find that defining explanation as a perlocutionary act requires subjective judgements, making it hard to assess an explanation objectively before the listener receives it. Conversely, we argue that existing legal systems favour definitions of explanation based on illocutionary acts. We propose that the most suitable definition of explanation depends on the situation: in educational settings a perlocutionary approach often works best, while legal settings call for an illocutionary one. Additionally, we show how current measures of explainability can be grouped by their theoretical grounding and the speech act they rely on. This categorisation helps pinpoint which measures are best suited to assessing the output of Explainable AI (XAI) tools in legal and other settings. In simpler terms, we explain how to evaluate and improve XAI and explanations in different contexts, such as education and law. By understanding where and when to apply different explainability measures, we can build better and more specialised XAI tools, leading to significant improvements in AI explainability.
Explainable Artificial Intelligence, 2023, pp. 25-47

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/961180
Warning: the data displayed have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: ND