
Manovi, L., Gallon, R., Furano, G., Rovatti, R., Mangia, M., Menicucci, A., et al. (2025). Beyond the Black Box: Advancing Transparency, Explainability, and Trust in AI for Space. Institute of Electrical and Electronics Engineers Inc. [10.1109/DFT66274.2025.11257510].

Beyond the Black Box: Advancing Transparency, Explainability, and Trust in AI for Space

2025

Abstract

Recent advancements in the space domain focus on integrating Artificial Intelligence (AI) solutions into safety-critical systems. The demanding space environment, with its stringent reliability requirements, constrained resources, and limited intervention capabilities, calls for highly dependable and interpretable AI, which in turn requires increased transparency, explainability, and trust. This work provides an overview of the traditional challenges associated with deploying AI in space, emphasizing critical applications where elevated levels of trust are essential. We outline the classical requirements for trustworthy AI, linking them to emerging international legislative frameworks that aim to regulate this complex and evolving field. Furthermore, we consider an onboard anomaly detection use case and propose a novel design approach that yields an explainable AI solution. Our approach is intended to enhance the reliability of anomaly detection by leveraging both advanced AI techniques and conventional components of the onboard Fault Detection, Isolation and Recovery (FDIR) subsystem, without significantly impacting the required computational resources. Aligned with the principles of fault tolerance and system dependability, this special session aims to foster trust in autonomous AI technologies by promoting the development of high-performing, transparent, and accountable solutions essential for next-generation electronic systems.
Proceedings - IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems, DFT, pp. 1-8
Manovi, L.; Gallon, R.; Furano, G.; Rovatti, R.; Mangia, M.; Menicucci, A.; Schiemenz, F.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1049037

Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science (ISI): not available