Borra D., Fraternali M., Ravanelli M., Magosso E. (2024). Multi-modal Decoding of Reach-to-Grasping from EEG and EMG via Neural Networks. Springer Science and Business Media Deutschland GmbH. DOI: 10.1007/978-3-031-71602-7_15.
Multi-modal Decoding of Reach-to-Grasping from EEG and EMG via Neural Networks
Borra D. (first author); Magosso E. (last author)
2024
Abstract
Convolutional neural networks (CNNs) have revolutionized motor decoding from electroencephalographic (EEG) signals, showing their ability to outperform traditional machine learning, especially in Brain-Computer Interface (BCI) applications. Processing other recording modalities (e.g., electromyography, EMG) together with EEG signals further improves motor decoding. However, multi-modal algorithms for decoding hand movements have mainly been applied to simple movements (e.g., wrist flexion/extension), while their adoption for decoding complex movements (e.g., different grip types) remains under-investigated. In this study, we recorded EEG and EMG signals from 12 participants while they performed a delayed reach-to-grasping task towards one of four possible objects (a handle, a pin, a card, and a ball), and we addressed multi-modal EEG+EMG decoding with a dual-branch CNN, each branch of which was based on EEGNet. The performance of the multi-modal approach was compared to mono-modal baselines (based on EEG or EMG alone). The multi-modal EEG+EMG pipeline outperformed the EEG-based pipeline during movement initiation, and the EMG-based pipeline during motor preparation. Finally, the multi-modal approach accurately discriminated between grip types throughout most of the task, especially from movement initiation onwards. Our results further validate multi-modal decoding for potential future BCI applications, aiming at a more natural user experience.
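
Below is a minimal, illustrative PyTorch sketch of the kind of dual-branch architecture the abstract describes: two EEGNet-style branches (one for EEG, one for EMG) whose feature vectors are concatenated and fed to a linear classifier over the four grip types. All hyperparameters here (channel counts, window length, filter sizes, pooling factors) are assumptions chosen for illustration, not the values used in the paper.

import torch
import torch.nn as nn

class EEGNetBranch(nn.Module):
    """One EEGNet-like branch: temporal conv -> depthwise spatial conv
    -> separable conv, producing a flat feature vector per trial.
    Hyperparameter defaults are illustrative assumptions."""
    def __init__(self, n_channels, f1=8, d=2, f2=16, kern_len=64):
        super().__init__()
        self.block1 = nn.Sequential(
            # Temporal convolution over each channel
            nn.Conv2d(1, f1, (1, kern_len), padding=(0, kern_len // 2), bias=False),
            nn.BatchNorm2d(f1),
            # Depthwise spatial convolution across all channels
            nn.Conv2d(f1, f1 * d, (n_channels, 1), groups=f1, bias=False),
            nn.BatchNorm2d(f1 * d),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.25),
        )
        self.block2 = nn.Sequential(
            # Separable convolution: depthwise temporal + pointwise
            nn.Conv2d(f1 * d, f1 * d, (1, 16), padding=(0, 8), groups=f1 * d, bias=False),
            nn.Conv2d(f1 * d, f2, (1, 1), bias=False),
            nn.BatchNorm2d(f2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.25),
        )

    def forward(self, x):  # x: (batch, 1, channels, time)
        return self.block2(self.block1(x)).flatten(1)

class DualBranchNet(nn.Module):
    """EEG and EMG branches fused by feature concatenation,
    followed by a linear classifier over the four grip types."""
    def __init__(self, n_eeg, n_emg, n_samples, n_classes=4):
        super().__init__()
        self.eeg_branch = EEGNetBranch(n_eeg)
        self.emg_branch = EEGNetBranch(n_emg)
        with torch.no_grad():  # infer fused feature size from dummy inputs
            n_feat = (self.eeg_branch(torch.zeros(1, 1, n_eeg, n_samples)).shape[1]
                      + self.emg_branch(torch.zeros(1, 1, n_emg, n_samples)).shape[1])
        self.classifier = nn.Linear(n_feat, n_classes)

    def forward(self, eeg, emg):
        return self.classifier(torch.cat([self.eeg_branch(eeg),
                                          self.emg_branch(emg)], dim=1))

# Example usage: 61 EEG + 8 EMG channels, 2 s windows at 128 Hz (assumed values)
model = DualBranchNet(n_eeg=61, n_emg=8, n_samples=256)
logits = model(torch.randn(4, 1, 61, 256), torch.randn(4, 1, 8, 256))
print(logits.shape)  # torch.Size([4, 4]): one score per grip type per trial

Feature-level concatenation is only one plausible fusion choice; the same two-branch layout would also support decision-level fusion (averaging per-branch class scores), at the cost of not letting the classifier learn cross-modal interactions.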