This study introduces Pro-Judice, a benchmark for assessing how well large language models align with procedural fairness in judicial contexts. Using criminal case data from the Chinese and US legal traditions, it shows that models' procedural fairness perceptions vary with the legal context of the test dataset, with model architecture and version, and with dataset-model interactions.
Chen, Q., Cheng, R., Liu, Y., Zheng, S., Rotolo, A., Liu, Y., et al. (2025). Pro-Judice: Aligning LLMs with Procedural Fairness in Judicial Contexts. IOS Press BV [10.3233/FAIA251616].
Pro-Judice: Aligning LLMs with Procedural Fairness in Judicial Contexts
Rotolo A.
2025
Abstract
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.