Magnini, M., Ciatto, G., Calegari, R., & Omicini, A. (2024). Enforcing Fairness via Constraint Injection with FaUCI. Aachen: CEUR-WS.
Enforcing Fairness via Constraint Injection with FaUCI
Matteo Magnini, Giovanni Ciatto, Roberta Calegari, Andrea Omicini
2024
Abstract
The problem of fairness in AI can be tackled by minimising bias in the data (pre-processing), in the algorithms (in-processing), or in the results (post-processing). In the particular case of in-processing applied to supervised machine learning, state-of-the-art solutions rely on a few well-known fairness metrics, e.g. demographic parity, disparate impact, or equalised odds, optimised during training. These, however, mostly focus on binary attributes and their effects on binary classification problems. Accordingly, in this work we propose FaUCI, a general-purpose framework for injecting fairness constraints into neural networks (or any model trained via stochastic gradient descent), supporting attributes of many sorts, including binary, discrete, and continuous features. To evaluate its effectiveness and efficiency, we test FaUCI against several sorts of features and fairness metrics. Furthermore, we compare FaUCI with state-of-the-art in-processing solutions, demonstrating its superiority.
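The abstract describes in-processing fairness as injecting a fairness constraint into the loss optimised by stochastic gradient descent. The sketch below illustrates that general idea only; it is not the FaUCI implementation. A logistic model (hypothetical names `train`, `fairness_penalty`) is trained with a binary cross-entropy loss plus a squared demographic-parity gap, weighted by an assumed hyperparameter `lam`, for a binary sensitive attribute.

```python
# Minimal sketch of in-processing fairness via constraint injection.
# NOTE: illustrative only, NOT the actual FaUCI method. It adds a
# demographic-parity penalty to the training loss of a logistic model
# optimised by gradient descent, for a binary sensitive attribute.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fairness_penalty(y_pred, sensitive):
    # Squared demographic-parity gap: difference between the mean
    # predictions of the two groups defined by the sensitive attribute.
    gap = y_pred[sensitive == 1].mean() - y_pred[sensitive == 0].mean()
    return gap ** 2

def train(X, y, sensitive, lam=1.0, lr=0.1, epochs=500):
    # lam weighs the fairness penalty against the cross-entropy loss.
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        # Gradient of the binary cross-entropy term.
        grad = X.T @ (p - y) / len(y)
        # Gradient of the demographic-parity penalty (chain rule).
        gap = p[sensitive == 1].mean() - p[sensitive == 0].mean()
        dp = p * (1.0 - p)  # derivative of the sigmoid
        g1 = (X[sensitive == 1] * dp[sensitive == 1, None]).mean(axis=0)
        g0 = (X[sensitive == 0] * dp[sensitive == 0, None]).mean(axis=0)
        grad += lam * 2.0 * gap * (g1 - g0)
        w -= lr * grad
    return w
```

Increasing `lam` trades predictive accuracy for a smaller demographic-parity gap, which is the typical accuracy/fairness trade-off evaluated in in-processing approaches.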