Symbolic knowledge extraction from opaque ML predictors in PSyKE: Platform design & experiments / Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, Andrea Omicini. - In: INTELLIGENZA ARTIFICIALE. - ISSN 1724-8035. - STAMPA. - 16:1(2022), pp. 27-48. [10.3233/IA-210120]

Symbolic knowledge extraction from opaque ML predictors in PSyKE: Platform design & experiments

Giovanni Ciatto; Roberta Calegari; Andrea Omicini
2022

Abstract

A common practice in modern explainable AI is to explain black-box machine learning (ML) predictors – such as neural networks – post hoc, by extracting symbolic knowledge from them in the form of either rule lists or decision trees. By acting as a surrogate model, the extracted knowledge aims to reveal the inner workings of the black box, thus enabling its inspection, representation, and explanation. Various knowledge-extraction algorithms have been presented in the literature so far. Unfortunately, running implementations of most of them are currently either proofs of concept or unavailable. In any case, a unified, coherent software framework supporting them all – as well as their interchange, comparison, and exploitation in arbitrary ML workflows – is currently missing. Accordingly, in this paper we discuss the design of PSyKE, a platform providing general-purpose support for symbolic knowledge extraction from different sorts of black-box predictors via many extraction algorithms. Notably, PSyKE targets symbolic knowledge in logic form, allowing the extraction of first-order logic clauses. The extracted knowledge is thus both machine- and human-interpretable, and can be used as a starting point for further symbolic processing – e.g., automated reasoning.
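
To make the surrogate-based extraction workflow described in the abstract concrete, the following minimal sketch trains an opaque predictor and then fits an interpretable decision tree on its predictions, reading human-readable rules off the tree. It uses plain scikit-learn purely for illustration; it is not PSyKE's actual API, and the model choices and hyper-parameters are illustrative assumptions.

# Minimal, library-agnostic sketch of surrogate-based knowledge extraction.
# NOT PSyKE's API: a generic illustration of the idea described in the abstract.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. The black-box predictor (here, a small neural network).
black_box = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
black_box.fit(X_train, y_train)

# 2. The surrogate is trained on the black box's *predictions*, not on the
#    original labels, so its rules approximate the black box's behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# 3. Human-readable rules extracted from the surrogate; a tool such as PSyKE
#    would instead emit logic clauses (e.g. Prolog-like rules).
print(export_text(surrogate, feature_names=load_iris().feature_names))
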
Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, Andrea Omicini
Files in this product:
Attachments, if any, are not displayed

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/890822
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 11
  • Web of Science (ISI): 4