Lalumera, E. (2025). Rethinking Explainability in AI for Diagnostic Imaging. New York: Routledge. https://doi.org/10.4324/9781003567622-16
Rethinking Explainability in AI for Diagnostic Imaging
Elisabetta Lalumera
2025
Abstract
This chapter argues for a shift from explainability to empirical validation as the primary criterion for integrating AI into diagnostic imaging. The focus should be on accuracy, validation, and clinical integration, ensuring AI systems are rigorously tested and calibrated to meet real-world medical needs rather than requiring them to produce explanations that may not meaningfully enhance clinician understanding.
Files in this record:

| File | Size | Format |
|---|---|---|
| Lalumera-xai+explainability-apr+2025.pdf (under embargo until 02/12/2026) | 332.48 kB | Adobe PDF |

Type: Postprint / Author's Accepted Manuscript (AAM), the version accepted for publication after peer review.
License: Open Access. Creative Commons Attribution - NonCommercial - NoDerivatives (CC BY-NC-ND).
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.