
Injective Domain Knowledge in Neural Networks for Transprecision Computing / Andrea Borghesi, Federico Baldo, Michele Lombardi, Michela Milano. - Electronic. - 12565:(2020), pp. 587-600. (Paper presented at The Sixth International Conference on Machine Learning, Optimization, and Data Science, held in Siena, July 19-23, 2020) [10.1007/978-3-030-64583-0_52].

Injective Domain Knowledge in Neural Networks for Transprecision Computing

Andrea Borghesi;Federico Baldo;Michele Lombardi;Michela Milano
2020

Abstract

Machine Learning (ML) models are very effective in many learning tasks, due to their capability to extract meaningful information from large data sets. Nevertheless, some learning problems cannot be easily solved by relying on data alone, e.g. when data are scarce or the function to be approximated is very complex. Fortunately, in many contexts domain knowledge is explicitly available and can be used to train better ML models. This paper studies the improvements that can be obtained by integrating prior knowledge into a context-specific, non-trivial learning task, namely precision tuning of transprecision computing applications. The domain information is injected into the ML models in different ways: (I) additional features, (II) an ad-hoc graph-based network topology, (III) regularization schemes. The results clearly show that ML models exploiting problem-specific information outperform the purely data-driven ones, with an average improvement of around 38%.
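As an illustration of the third injection mechanism named in the abstract (regularization), the following is a minimal sketch, not taken from the paper, of how a domain-knowledge penalty could be added to a standard regression loss in PyTorch. The monotonicity constraint (higher precision should not yield a larger predicted error), the network architecture, and all hyperparameters are assumptions made for illustration only.

```python
# Hedged sketch (not the authors' code): a domain-knowledge regularization
# term added to a standard regression loss in PyTorch. The "monotonicity"
# prior below is a hypothetical example of domain knowledge for precision
# tuning: raising every variable's precision should not increase the error.
import torch
import torch.nn as nn


class Regressor(nn.Module):
    def __init__(self, n_vars, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_vars, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x)


def knowledge_penalty(model, x, delta=1.0):
    # Penalize violations of the (assumed) monotonicity prior: a
    # configuration with `delta` extra bits of precision everywhere
    # should not be predicted to have a larger error.
    y_lo = model(x)
    y_hi = model(x + delta)
    return torch.relu(y_hi - y_lo).mean()


model = Regressor(n_vars=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(256, 10) * 52   # synthetic precision configurations
y = torch.rand(256, 1)         # synthetic error targets
lambda_k = 0.1                 # weight of the knowledge term (assumption)

for epoch in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y) \
        + lambda_k * knowledge_penalty(model, x)
    loss.backward()
    opt.step()
```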
2020
Machine Learning, Optimization, and Data Science. LOD 2020
pp. 587-600
Files in this item:

File: main.pdf
Access: open access
Type: Postprint
License: Free open access license
Size: 473.32 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/801890
Citations
  • PubMed Central: n/a
  • Scopus: 2
  • Web of Science: n/a