This study introduces Pro-Judice, a benchmark for assessing how large language models align with procedural fairness in judicial contexts. Using criminal case data from the Chinese and US legal traditions, it shows that models' perceptions of procedural fairness vary with the legal context of the test dataset, with model architecture and version, and with dataset-model interactions.

Chen, Q., Cheng, R., Liu, Y., Zheng, S., Rotolo, A., Liu, Y., et al. (2025). Pro-Judice: Aligning LLMs with Procedural Fairness in Judicial Contexts. IOS Press. doi:10.3233/FAIA251616

Pro-Judice: Aligning LLMs with Procedural Fairness in Judicial Contexts


Frontiers in Artificial Intelligence and Applications, 2025, pp. 385–388.
Chen, Q.; Cheng, R.; Liu, Y.; Zheng, S.; Rotolo, A.; Liu, Y.; Shen, W.
Files in this item: no attachments are displayed.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11585/1043602
Note: the data displayed have not been validated by the University.
