Interpretable Narrative Explanation for ML Predictors with LP: A Case Study for XAI

Roberta Calegari; Giovanni Ciatto; Jason Dellaluce; Andrea Omicini
2019

Abstract

In the era of the digital revolution, individual lives increasingly cross and interconnect ubiquitous online domains and offline reality through smart technologies: technologies that discover, store, process, learn, analyse, and predict from huge amounts of environment-collected data. Sub-symbolic techniques, such as deep learning, play a key role there, yet they are often built as black boxes that are not inspectable, interpretable, or explainable. New research efforts towards explainable artificial intelligence (XAI) are trying to address those issues, with the ultimate goal of building understandable, accountable, and trustable AI systems; still, there seemingly remains a long way to go. Generally speaking, while we fully understand and appreciate the power of sub-symbolic approaches, we believe that symbolic approaches to machine intelligence, once properly combined with sub-symbolic ones, have a critical role to play in achieving key properties of XAI such as observability, interpretability, explainability, accountability, and trustability. In this paper we describe an example of the integration of symbolic and sub-symbolic techniques. First, we sketch a general framework where symbolic and sub-symbolic approaches can fruitfully combine to produce intelligent behaviour in AI applications. Then, we focus in particular on the goal of building a narrative explanation for ML predictors: to this end, we exploit the logical knowledge obtained by translating decision tree predictors into logic programs.
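As a rough illustration of that last step, the hypothetical Python sketch below walks a trained scikit-learn decision tree and prints one Prolog-style clause per root-to-leaf path. The helper name, the clause format, and the choice of scikit-learn are illustrative assumptions, not the authors' actual implementation.

# Hypothetical sketch: turning a fitted decision tree into Prolog-style
# clauses, one clause per root-to-leaf path. An assumption-laden
# illustration of the general idea, not the code from the paper.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(iris.data, iris.target)

def tree_to_clauses(model, feature_names, class_names):
    # Recursively walk the tree, collecting the split conditions along
    # each path; emit them as the body of a clause whose head is the
    # class predicted at the leaf.
    t = model.tree_
    clauses = []

    def walk(node, conditions):
        if t.children_left[node] == t.children_right[node]:  # leaf node
            label = class_names[t.value[node].argmax()]
            body = ", ".join(conditions) if conditions else "true"
            clauses.append(f"class({label}) :- {body}.")
            return
        feat = feature_names[t.feature[node]].split(" (")[0].replace(" ", "_")
        thr = t.threshold[node]
        walk(t.children_left[node], conditions + [f"{feat} =< {thr:.2f}"])
        walk(t.children_right[node], conditions + [f"{feat} > {thr:.2f}"])

    walk(0, [])
    return clauses

for clause in tree_to_clauses(clf, iris.feature_names, iris.target_names):
    print(clause)

Each printed clause (e.g. a clause such as class(setosa) :- petal_length =< 2.45.) reads as a small narrative fragment: the chain of conditions leading the predictor to a given class, which is the kind of logical knowledge the paper builds narrative explanations from.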
Conference: WOA 2019 - 20th Workshop "From Objects to Agents"
Pages: 105-112
Interpretable Narrative Explanation for ML Predictors with LP: A Case Study for XAI / Roberta Calegari, Giovanni Ciatto, Jason Dellaluce, Andrea Omicini. - ELECTRONIC. - 2404:(2019), pp. 105-112. (Paper presented at the 20th Workshop "From Objects to Agents" (WOA 2019), held in Parma, Italy, 26-28 June 2019).
Roberta Calegari, Giovanni Ciatto, Jason Dellaluce, Andrea Omicini
Files in this record:

File: paper16.pdf (open access)
Description: Publisher's PDF
Type: Publisher's version (PDF)
License: Open Access licence. Creative Commons Attribution (CC BY)
Size: 1.15 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/692870
Citations
  • Scopus 16