Borra D., Ravanelli M. (2024). Explaining Network Decision Provides Insights on the Causal Interaction Between Brain Regions in a Motor Imagery Task. Springer Science and Business Media Deutschland GmbH. doi:10.1007/978-3-031-71602-7_14.
Explaining Network Decision Provides Insights on the Causal Interaction Between Brain Regions in a Motor Imagery Task
Borra D.
2024
Abstract
Neural decoding widely exploits machine learning for classifying electroencephalographic (EEG) signals in brain-computer interface applications. Recent advancements in neural decoding concern the use of brain functional connectivity estimates as input features and the adoption of convolutional neural networks (CNNs) to realize decoders. Moreover, explainable artificial intelligence (XAI) approaches based on CNNs are gaining interest in the neuroscience community, both for validating the knowledge learned by networks and for using the decoder not only to classify the EEG but also to analyze it in a data-driven way, without a priori assumptions. However, the adoption of connectivity estimates for neural decoding is still in its infancy: existing studies adopt non-directed connectivity measures, limit the analysis to a few interactions/frequency ranges, and exploit classic machine learning approaches without exploring CNNs. Moreover, XAI approaches have never been applied to analyze EEG-based functional connectivity. To overcome these limitations, we design and apply a CNN for processing directed connectivity measures estimated via spectral Granger causality. The CNN automatically learns features in the frequency and spatial domains, and it is coupled with an explanation technique (DeepLIFT) for highlighting the most relevant connectivity inflows and outflows associated with each decoded brain state. Our approach is applied to motor imagery decoding and achieves state-of-the-art performance compared to existing networks. The DeepLIFT relevance representations match the directional interactions known to occur when imagining movements, validating the brain network features learned by the CNN.
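The paper estimates directed connectivity with spectral Granger causality. As a simplified, hypothetical illustration only (not the authors' implementation, which works in the frequency domain), the sketch below shows the core idea in the time domain: a signal x "Granger-causes" y if adding x's past to an autoregressive model of y reduces the prediction-error variance. The `granger_causality` function and the autoregressive order are assumptions for this example.

```python
import numpy as np

def granger_causality(x, y, order=2):
    """Time-domain Granger causality of x -> y.

    Fits two autoregressive models for y -- one using only y's past
    (restricted), one using the past of both y and x (full) -- and
    returns ln(var_restricted / var_full). Values well above zero
    suggest that x Granger-causes y.
    """
    n = len(y)
    target = y[order:]
    # Lagged design matrices: column k holds the k-step-delayed signal
    lags_y = np.column_stack([y[order - k:n - k] for k in range(1, order + 1)])
    lags_x = np.column_stack([x[order - k:n - k] for k in range(1, order + 1)])
    # Restricted model: predict y from its own past only
    beta_r, *_ = np.linalg.lstsq(lags_y, target, rcond=None)
    var_r = np.var(target - lags_y @ beta_r)
    # Full model: predict y from the past of both y and x
    full = np.hstack([lags_y, lags_x])
    beta_f, *_ = np.linalg.lstsq(full, target, rcond=None)
    var_f = np.var(target - full @ beta_f)
    return np.log(var_r / var_f)

# Toy example: x drives y with a one-sample delay, so the x -> y
# score should be large and the y -> x score close to zero.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = np.roll(x, 1) + 0.1 * rng.standard_normal(2000)
print(granger_causality(x, y))  # large positive
print(granger_causality(y, x))  # near zero
```

In the spectral variant used in the paper, the same restricted/full comparison is decomposed over frequency, yielding one directed influence estimate per frequency bin; those frequency- and direction-resolved maps are what the CNN takes as input.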