Agostinho, D., Borra, D., Castelo-Branco, M., & Simões, M. (2025). Explainability of fMRI Decoding Models Can Unveil Insights into Neural Mechanisms Related to Emotions. Springer Science and Business Media Deutschland GmbH. doi:10.1007/978-3-031-73500-4_24
Explainability of fMRI Decoding Models Can Unveil Insights into Neural Mechanisms Related to Emotions
Borra, Davide
2025
Abstract
Functional magnetic resonance imaging (fMRI) enables the visualization and analysis of brain activity, offering insights into the neural processes underlying various cognitive functions and emotional responses. Decoding models, particularly those based on deep learning, have emerged as powerful tools for predicting cognitive states from fMRI data. However, the complex architectures of deep learning models often make their decision-making processes difficult to understand. This study proposes a novel approach combining the fMRINet decoding model with the DeepLIFT interpretability technique to unveil hidden neuronal processes involved in distinguishing high and low emotional arousal states. Our results demonstrate robust classification performance of the fMRINet, exceeding chance levels in both typically developing and Autism Spectrum Disorder populations. Importantly, our interpretability analysis identifies statistically significant regions of interest (ROIs) for emotional processing, shedding light on both known and novel neural mechanisms underlying emotional arousal. These findings hold promise for the application of explainable models in clinical settings, facilitating the identification of abnormal patterns without relying on predefined assumptions. While our study lays the groundwork for future investigations into the significance of the identified ROIs and their broader applicability across diverse populations, addressing limitations such as the handling of temporal information and of negative contributions remains crucial for model refinement. Overall, our work contributes to advancements in understanding and interpreting neural data, with implications for personalized medicine and enhanced diagnostic capabilities in fMRI research and its clinical applications.
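
Since the abstract describes pairing a convolutional fMRI decoder with DeepLIFT attributions, the sketch below illustrates that general workflow in PyTorch using the Captum library. It is not the authors' pipeline: the DecoderCNN architecture, the ROI/timepoint dimensions, the all-zero baseline, and the target class are illustrative assumptions standing in for the actual fMRINet and its inputs.

# Minimal sketch (assumed stand-in for fMRINet, not the authors' code):
# DeepLIFT attributions over ROI x time inputs of a trained decoding CNN.
import torch
import torch.nn as nn
from captum.attr import DeepLift

N_ROIS, N_TIMEPOINTS = 116, 30  # assumed: atlas ROI count x TRs per trial

class DecoderCNN(nn.Module):
    """Placeholder convolutional decoder over (ROIs x timepoints) trials."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 5), padding=(0, 2)),  # temporal filtering
            nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=(N_ROIS, 1)),            # spatial (ROI) mixing
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 1)),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):             # x: (batch, 1, N_ROIS, N_TIMEPOINTS)
        z = self.features(x).flatten(1)
        return self.classifier(z)     # logits for low/high arousal

model = DecoderCNN().eval()           # weights assumed trained beforehand
trials = torch.randn(4, 1, N_ROIS, N_TIMEPOINTS)  # stand-in fMRI trials

# DeepLIFT scores each input feature against a reference input; an
# all-zero baseline is a common (but not the only) choice.
explainer = DeepLift(model)
attr = explainer.attribute(trials, baselines=torch.zeros_like(trials), target=1)

# Sum attributions over time and average across trials to rank ROIs by
# their contribution to the "high arousal" class.
roi_importance = attr.squeeze(1).sum(dim=2).mean(dim=0)  # shape: (N_ROIS,)
print(roi_importance.topk(5).indices)

Collapsing the per-trial attribution maps into ROI-level scores, as above, is one simple aggregation choice; group-level statistical testing of such scores is what would identify the ROIs the abstract refers to.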