Lalumera, E. (2025). Rethinking Explainability in AI for Diagnostic Imaging. New York: Routledge. https://doi.org/10.4324/9781003567622-16
Rethinking Explainability in AI for Diagnostic Imaging
Elisabetta Lalumera
2025
Abstract
This chapter argues for a shift from explainability to empirical validation as the primary criterion for integrating AI into diagnostic imaging. The focus should be on accuracy, validation, and clinical integration, ensuring AI systems are rigorously tested and calibrated to meet real‑world medical needs rather than requiring them to produce explanations that may not meaningfully enhance clinician understanding.


