Sovrano F., Vitali F. (2023). Perlocution vs Illocution: How Different Interpretations of the Act of Explaining Impact on the Evaluation of Explanations and XAI. Springer [10.1007/978-3-031-44064-9_2].
Perlocution vs Illocution: How Different Interpretations of the Act of Explaining Impact on the Evaluation of Explanations and XAI
Sovrano F.; Vitali F.
2023
Abstract
This article discusses the concepts of illocutionary, perlocutionary, and locutionary acts and their role in understanding explanations. Illocutionary acts concern the speaker's intended meaning, perlocutionary acts concern the effect produced on the listener, and locutionary acts concern the utterance itself. We propose a new way to categorise established definitions of explanation according to these speech-act distinctions, which sharpens our understanding of how explanations work. We find that defining explanation as a perlocutionary act requires subjective judgements, making it hard to assess an explanation objectively before the listener has received it. Conversely, we argue that existing legal systems favour definitions of explanation grounded in illocutionary acts. We therefore propose that the precise meaning of explanation depends on the context, with some kinds of definitions suiting specific circumstances better: for example, a perlocutionary approach often works best in educational settings, whereas legal settings call for an illocutionary approach. We also show how current measures of explainability can be grouped according to their theoretical underpinnings and the speech act they rely on. This categorisation helps pinpoint which measures are best suited to assessing the outputs of Explainable AI (XAI) tools in legal and other settings. In short, we explain how to evaluate and improve XAI and explanations in different situations, such as education and law. Understanding where and when to apply different explainability measures enables better and more specialised XAI tools, leading to significant improvements in AI explainability.
File: manuscript.pdf (Open Access from 30/10/2024)
Type: Postprint
Licence: Free open-access licence
Size: 288.56 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.