
Lazzari, N., De Giorgis, S., Gangemi, A., Presutti, V. (2025). Explainable Moral Values: A Neuro-Symbolic Approach to Value Classification. Springer Science and Business Media Deutschland GmbH [10.1007/978-3-031-78955-7_20].

Explainable Moral Values: A Neuro-Symbolic Approach to Value Classification

Lazzari, N.; De Giorgis, S.; Gangemi, A.; Presutti, V.
2025

Abstract

This work explores the integration of ontology-based reasoning and Machine Learning techniques for explainable value classification. Building on an ontological formalization of moral values from Moral Foundations Theory, based on the DnS Ontology Design Pattern, the sandra neuro-symbolic reasoner is used to infer the values (formalized as descriptions) satisfied by a given sentence. Sentences, alongside their structured representations, are automatically generated using an open-source Large Language Model. The inferred descriptions are used to automatically detect the value associated with a sentence. We show that relying solely on the reasoner's inferences yields explainable classification comparable to that of more complex approaches, and that combining the reasoner's inferences with distributional semantics methods largely outperforms all the baselines, including complex models based on neural network architectures. Finally, we build a visualization tool to explore the potential of theory-based value classification, which is publicly available at http://xmv.geomeaning.com/.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 238-255
Files in this record:
Attachments, if any, are not displayed

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1037190
Warning: the displayed data have not been validated by the university.

Citations
  • Scopus: 0