Rethinking Explainability in AI for Diagnostic Imaging

Elisabetta Lalumera
First author
2025

Abstract

This chapter argues for a shift from explainability to empirical validation as the primary criterion for integrating AI into diagnostic imaging. The focus should be on accuracy, validation, and clinical integration, ensuring AI systems are rigorously tested and calibrated to meet real‑world medical needs rather than requiring them to produce explanations that may not meaningfully enhance clinician understanding.
In: Digital Development Technology, Ethics and Governance, pp. 279–296. New York: Routledge, 2025. DOI: 10.4324/9781003567622-16
Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1032952