SMLE: Safe Machine Learning via Embedded Overapproximation

Francobaldi, Matteo; Lombardi, Michele
2025

Abstract

Despite recent advances in Machine Learning (ML) and Neural Networks, providing formal guarantees on the behavior of these systems is still an open problem, and a crucial requirement for their adoption in regulated or safety-critical scenarios. We consider the task of training differentiable ML models guaranteed to satisfy designer-chosen properties, stated as input-output implications. This is very challenging, due to the computational complexity of rigorously verifying and enforcing compliance in deep neural models. We propose an approach based on: 1) a general, simple architecture enabling efficient verification with conservative semantics; 2) a rigorous training algorithm based on the Projected Gradient Method; 3) a formulation of the problem of searching for strong counterexamples. The proposed framework, being only marginally affected by model complexity, scales well to practical applications, and produces models that provide full property satisfaction guarantees. We evaluate our approach on properties defined by linear inequalities in regression, and on mutually exclusive classes in multilabel classification. Our approach is competitive with a baseline that enforces the properties in preprocessing (on training data) and postprocessing (on model predictions). Finally, our contributions establish a framework that opens up multiple research directions and potential improvements.
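The abstract names the three ingredients of the approach: a verification-friendly architecture, training via the Projected Gradient Method, and a search for strong counterexamples. As a rough illustration of the projected-gradient idea only, here is a minimal NumPy sketch under strong simplifying assumptions: a linear regressor on the box [0, 1]^d and a single designer-chosen property f(x) <= C for all x in the box. The model, the projection operator, and the counterexample search in the paper are more general; everything below (the worst_case and project helpers, the bias-shift projection) is a hypothetical toy, not the SMLE architecture.

```python
import numpy as np

# Toy setup (NOT the SMLE architecture from the paper): a linear
# regressor f(x) = w @ x + b on the box X = [0, 1]^d, with the
# designer-chosen property "for all x in X: f(x) <= C".
# For a linear model the worst case over the box has a closed form,
# which allows an exact projection step after every gradient update.

rng = np.random.default_rng(0)
d, n, C = 5, 200, 1.0

X = rng.uniform(0.0, 1.0, size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

w = np.zeros(d)
b = 0.0

def worst_case(w, b):
    # max over x in [0,1]^d of w @ x + b, attained at x_i = 1 if w_i > 0 else 0.
    return b + np.clip(w, 0.0, None).sum()

def project(w, b):
    # Project (w, b) onto {worst_case(w, b) <= C} by shifting the bias.
    # One simple feasible projection for this toy; not the paper's operator.
    excess = worst_case(w, b) - C
    if excess > 0:
        b = b - excess
    return w, b

lr = 0.05
for step in range(500):
    # Gradient step on mean squared error.
    pred = X @ w + b
    grad_w = 2.0 * X.T @ (pred - y) / n
    grad_b = 2.0 * (pred - y).mean()
    w, b = w - lr * grad_w, b - lr * grad_b
    # Projection step: restore the property after every update.
    w, b = project(w, b)

# Counterexample search: for this toy model, the strongest violation
# candidate is simply the box vertex that maximizes the output.
x_star = (w > 0).astype(float)
print("worst-case output:", worst_case(w, b), "<= C =", C)
```

By construction, the projection restores the property after every update, so the toy model satisfies it exactly at the end of training; per the abstract, the paper obtains the analogous guarantee for deep models by verifying a conservative overapproximation rather than the exact network.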
Proceedings of the AAAI Conference on Artificial Intelligence, 39(26), pp. 27286–27294
Francobaldi, M., & Lombardi, M. (2025). SMLE: Safe Machine Learning via Embedded Overapproximation. Proceedings of the AAAI Conference on Artificial Intelligence, 39(26), 27286–27294. https://doi.org/10.1609/aaai.v39i26.34938

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1027463