Assessing the Transparency and Explainability of AI Algorithms in Planning and Scheduling tools: A Review of the Literature / Sofia Morandini, Federico Fraboni, Enzo Balatti, Aranka Hackmann, Hannah Brendel, Gabriele Puzzo, Lucia Volpi, Marco De Angelis, Luca Pietrantoni. - ELECTRONIC. - (2023), pp. 1-11. (Paper presented at the 10th International Conference on Human Interaction and Emerging Technologies (IHIET 2023), held in Nice, 22-24 August 2023) [10.54941/ahfe1004068].

Assessing the Transparency and Explainability of AI Algorithms in Planning and Scheduling tools: A Review of the Literature

Sofia Morandini (first author); Federico Fraboni (second author); Enzo Balatti; Aranka Hackmann; Hannah Brendel; Gabriele Puzzo; Lucia Volpi; Marco De Angelis (second-to-last author); Luca Pietrantoni (last author)
2023

Abstract

As AI technologies enter our working lives at an ever-increasing pace, there is a growing need for AI systems to work synergistically with humans at work. One critical requirement for such synergistic human-AI interaction is that the AI systems' behavior be explainable to the humans in the loop. AI decision-making has exceeded human capability in many specific domains, yet inherently black-box algorithms and opaque system information lead to results that are highly accurate but incomprehensible. The need for explainability in intelligent decision-making is therefore becoming urgent, and a transparent process can strengthen trust between humans and machines. The TUPLES project, a three-year Horizon Europe R&I project, aims to bridge this gap by developing AI-based planning and scheduling (P&S) tools using a comprehensive, human-centered approach. TUPLES leverages data-driven and knowledge-based symbolic AI methods to provide scalable, transparent, robust, and secure P&S solutions, and adopts a use-case-oriented methodology to ensure practical applicability: use cases are chosen based on input from industry experts, cutting-edge advances, and manageable risks (e.g., manufacturing, aviation, waste management). The EU guidelines for Trustworthy Artificial Intelligence highlight key requirements such as human agency and oversight, transparency, fairness, societal well-being, and accountability, and the Assessment List for Trustworthy Artificial Intelligence (ALTAI) gives businesses and organizations a practical self-assessment tool for evaluating their AI systems. Existing AI-based P&S tools only partially meet these criteria, so innovative AI development approaches are necessary. We conducted a literature review to explore current research on the transparency and explainability of AI algorithms in P&S, aiming to identify metrics and recommendations. The findings highlight the importance of Explainable AI (XAI) in AI design and implementation: XAI addresses the black-box problem by making AI systems explainable, meaningful, and accurate, drawing on pre-modeling, in-modeling, and post-modeling explainability techniques and on psychological concepts of human explanation and interpretation for a human-centered approach. The review pinpoints specific XAI methods and offers evidence to guide the selection of XAI tools in planning and scheduling.
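
To make the post-modeling (post-hoc) category mentioned in the abstract concrete, the sketch below shows one such model-agnostic technique: permutation feature importance computed with scikit-learn. This is an illustration only, not code from the paper or the TUPLES project; the classifier, the synthetic data, and feature names such as processing_time are hypothetical stand-ins for inputs a P&S tool might use.

# Illustrative sketch (not from the paper): a post-modeling (post-hoc)
# explainability technique applied to a toy scheduling-style classifier.
# Assumes scikit-learn is installed; data and feature names are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features for predicting whether a job misses its deadline:
# processing time, queue length, machine load.
X = rng.random((500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.standard_normal(500) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc explanation: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in zip(["processing_time", "queue_length", "machine_load"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")

Because permutation importance needs only query access to a trained model, it applies even to the opaque, black-box P&S models the abstract describes, which is what makes post-modeling methods attractive when in-model transparency is unavailable.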
Human Interaction and Emerging Technologies (IHIET 2023), Vol. 111, 2023, 609–619
pp. 1–11
Files in this product:
File: 978-1-958651-87-2_65.pdf
Access: open access
Type: Publisher's version (PDF)
License: Open Access License. Creative Commons Universal – Public Domain Dedication (CC0 1.0)
Size: 569.78 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/939716
Citations
  • PMC: n/a
  • Scopus: n/a
  • Web of Science: n/a