Di Florio, C., Rotolo, A. (2024). Judicial Explanations. Berlin: Springer [10.1007/978-3-031-72407-7_8].
Judicial Explanations
Di Florio, Cecilia; Rotolo, Antonino
2024
Abstract
The use of machine and deep learning techniques to predict outcomes in legal proceedings is a highly debated topic among legal scholars and policymakers. These technologies have the potential to support judicial decision-making, assist litigants, and analyze biases within the legal process. However, challenges remain, notably in the reluctance of judges to adopt such tools due to concerns over judicial independence; the normative correctness, accuracy, and robustness of algorithmic decisions; and the transparency of AI systems. It has been argued that methods are needed to validate AI-based judicial prediction mechanisms. This paper contributes to addressing these challenges by developing a general framework for judicial case-based reasoning (CBR) grounded in Defeasible Logic and Argumentation Semantics. We explore legal CBR, focusing on inconsistencies and incomplete knowledge within case bases, and emphasize the importance of normative explanations to ensure transparency and justification in legal decision-making. By reconstructing CBR and normative explanations within an argumentation framework, we provide a formal mechanism for validating AI-based judicial predictions.


