Enforcing Fairness via Constraint Injection with FaUCI

Matteo Magnini, Giovanni Ciatto, Roberta Calegari, Andrea Omicini
2024

Abstract

The problem of fairness in AI can be tackled by minimising bias in the data (pre-processing), in the algorithms (in-processing), or in the results (post-processing). In the particular case of in-processing applied to supervised machine learning, state-of-the-art solutions rely on a few well-known fairness metrics (e.g., demographic parity, disparate impact, or equalised odds) optimised during training; these, however, mostly focus on binary attributes and their effects on binary classification problems. Accordingly, in this work we propose FaUCI as a general-purpose framework for injecting fairness constraints into neural networks (or any model trained via stochastic gradient descent), supporting attributes of many sorts, including binary, discrete, and continuous features. To evaluate its effectiveness and efficiency, we test FaUCI against several sorts of features and fairness metrics. Furthermore, we compare FaUCI with state-of-the-art in-processing solutions, demonstrating its superiority.
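
As a rough illustration of the constraint-injection idea the abstract describes (not the authors' actual FaUCI implementation), the sketch below adds a demographic-parity penalty to a standard binary cross-entropy loss, so that stochastic gradient descent optimises accuracy and fairness jointly. The penalty function, the weight lambda_fair, and the encoding of the sensitive attribute as a 0/1 tensor are all assumptions made for this example.

import torch

def demographic_parity_gap(y_pred, sensitive):
    # Absolute difference between the average predicted positive rate of the
    # protected group (sensitive == 1) and the remaining samples.
    # Differentiable, so SGD can minimise it alongside the task loss.
    # Assumes every mini-batch contains samples from both groups.
    return (y_pred[sensitive == 1].mean() - y_pred[sensitive == 0].mean()).abs()

def fair_loss(y_pred, y_true, sensitive, lambda_fair=1.0):
    # Task loss plus a weighted fairness penalty; lambda_fair is a
    # hypothetical knob trading predictive accuracy against fairness.
    bce = torch.nn.functional.binary_cross_entropy(y_pred, y_true)
    return bce + lambda_fair * demographic_parity_gap(y_pred, sensitive)

Note that this sketch only covers a binary sensitive attribute; the paper's stated contribution is precisely to generalise such penalties to discrete and continuous features, for which the exact formulation should be taken from the paper itself.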
Venue: AEQUITAS 2024: Fairness and Bias in AI (2024)
Pages: 1–13
Magnini, M., Ciatto, G., Calegari, R., Omicini, A. (2024). Enforcing Fairness via Constraint Injection with FaUCI. Aachen: CEUR-WS.
Files in this item:
File: paper8.pdf (open access)
Type: publisher's version (PDF)
Licence: Creative Commons Attribution (CC BY)
Size: 1.26 MB
Format: Adobe PDF


Use this identifier to cite or link to this document: https://hdl.handle.net/11585/995740
Citations:
  • Scopus: 0