Injecting Domain Knowledge in Neural Networks: A Controlled Experiment on a Constrained Problem

Silvestri M. (Software); Lombardi M. (Methodology); Milano M. (Supervision)
2021

Abstract

Recent research has shown how Deep Neural Networks trained on historical solution pools can tackle CSPs to some degree, with potential applications in problems with implicit soft and hard constraints. In this paper, we consider a setup where one has offline access to symbolic, incomplete problem knowledge, which, however, cannot be employed at search time. We show how such knowledge can be generally treated as a propagator, we devise an approach to distill it into the weights of a network, and we define a simple procedure to extensively exploit even small solution pools. Rather than tackling a real-world application directly, we perform experiments in a controlled setting, i.e., the classical Partial Latin Square completion problem, aimed at identifying patterns, potential advantages, and challenges. Our analysis shows that injecting knowledge at training time can be very beneficial with small solution pools, but may have less reliable effects with large solution pools. Scalability appears as the greatest challenge, as it affects the reliability of the incomplete knowledge and necessitates larger solution pools.
Published in: Integration of Constraint Programming, Artificial Intelligence, and Operations Research (CPAIOR 2021), pp. 266–282.
Silvestri M., Lombardi M., Milano M. (2021). Injecting Domain Knowledge in Neural Networks: A Controlled Experiment on a Constrained Problem. Springer Science and Business Media Deutschland GmbH [10.1007/978-3-030-78230-6_17].
Files in this record:

CPAIOR_2021__Knowledge_injection_in_DNNs.pdf
Type: Postprint
Access: Open access (free-access license)
Size: 1.93 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/861097
Citations
  • Scopus 8