
Unveiling Opaque Predictors via Explainable Clustering: The CReEPy Algorithm / Sabbatini F.; Calegari R. - ELECTRONIC. - 3615:(2023), pp. 1-14. (Paper presented at the 2nd Workshop on Bias, Ethical AI, Explainability and the Role of Logic and Logic Programming, BEWARE 2023, held in Rome, Italy, in 2023).

Unveiling Opaque Predictors via Explainable Clustering: The CReEPy Algorithm

Calegari R.
2023

Abstract

Machine learning black boxes, such as deep neural networks, are often hard to explain because their predictions depend on complicated relationships involving a huge number of internal parameters and input features. This opaqueness makes their predictions untrustworthy from a human perspective, especially in critical applications. In this paper we tackle this issue by introducing the design and implementation of CReEPy, an algorithm performing symbolic knowledge extraction based on explainable clustering. In particular, CReEPy relies on the underlying clustering performed by the ExACT or CREAM procedures to provide human-interpretable Prolog rules mimicking the behaviour of the opaque model. Experiments assessing both the human readability and the predictive performance of the proposed algorithm are discussed here, using existing state-of-the-art techniques as benchmarks for comparison.
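The general idea the abstract describes — clustering an opaque model's input space and summarising each cluster as an interval-based Prolog rule — can be sketched in a few lines. The snippet below is an illustrative toy, not the actual CReEPy, ExACT, or CREAM implementation: the `extract_rules` function, the grouping-by-output strategy, and the rule syntax are all simplifying assumptions made here for exposition.

```python
# Illustrative sketch (hypothetical, NOT the actual CReEPy/ExACT/CREAM code):
# partition an opaque model's input space by querying it, then summarise
# each partition as a hyper-cubic, interval-based Prolog-like rule.

def extract_rules(samples, feature_names, predict):
    """Group samples by the opaque model's output, then describe each
    group with per-feature [min, max] intervals as a Prolog-style rule."""
    groups = {}
    for x in samples:
        groups.setdefault(predict(x), []).append(x)
    rules = []
    for label, points in groups.items():
        conditions = []
        for i, name in enumerate(feature_names):
            values = [p[i] for p in points]
            conditions.append(f"{name} in [{min(values)}, {max(values)}]")
        head = f"class({', '.join(feature_names)}, {label})"
        rules.append(head + " :- " + ", ".join(conditions) + ".")
    return rules

# Toy opaque model standing in for a trained neural network.
black_box = lambda p: "pos" if p[0] > 0.5 else "neg"
samples = [(0.1, 0.2), (0.3, 0.9), (0.7, 0.4), (0.9, 0.8)]
for rule in extract_rules(samples, ["X", "Y"], black_box):
    print(rule)
```

Each printed rule (e.g. `class(X, Y, neg) :- X in [0.1, 0.3], Y in [0.2, 0.9].`) mimics the black box inside one hyper-cubic region, which conveys the readability goal of the paper; the real algorithms use far more sophisticated, hierarchical explainable clustering than this grouping-by-label stand-in.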
2023
CEUR Workshop Proceedings
1
14
Sabbatini F.; Calegari R.
Files in this record:

File: paper1.pdf
Access: open access
Type: Publisher's version (PDF)
License: Open Access License. Creative Commons Attribution (CC BY)
Size: 2.51 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/962357
Citations
  • Scopus: 0