
A neurocomputational analysis of visual bias on bimanual tactile spatial perception during a crossmodal exposure

2022

Abstract

Vision and touch both support spatial information processing. These sensory systems also exhibit highly specific interactions in spatial perception, which may reflect multisensory representations that are learned through visuotactile (VT) experiences. Recently, Wani and colleagues reported that task-irrelevant visual cues bias tactile perception, in a brightness-dependent manner, on a task requiring participants to detect unimanual and bimanual cues. Importantly, tactile performance remained spatially biased after VT exposure, even when no visual cues were presented. These effects on bimanual touch conceivably reflect cross-modal learning, but the neural substrates that are changed by VT experience are unclear. We previously described a neural network capable of simulating VT spatial interactions. Here, we exploited this model to test different hypotheses regarding potential network-level changes that may underlie the VT learning effects. Simulation results indicated that VT learning effects are inconsistent with plasticity restricted to unisensory visual and tactile hand representations. Similarly, VT learning effects were also inconsistent with changes restricted to the strength of inter-hemispheric inhibitory interactions. Instead, we found that both the hand representations and the inter-hemispheric inhibitory interactions need to be plastic to fully recapitulate VT learning effects. Our results imply that cross-modal learning of bimanual spatial perception involves multiple changes distributed over a VT processing cortical network.
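The abstract's third hypothesis — that both the hand representations and the inter-hemispheric inhibition must be plastic to reproduce the learning effects — can be illustrated with a toy two-unit rate model. This is a minimal sketch of the general idea only, not the authors' network: the units, parameter names (`w_in`, `w_inh`), and values are our own illustrative assumptions.

```python
import numpy as np

def steady_response(touch, w_in, w_inh, steps=200, dt=0.1):
    """Relaxed activity of two mutually inhibiting hand units.

    touch : drive to the (left, right) hand unit
    w_in  : plastic input gain of each hand representation
    w_inh : strength of the inter-hemispheric inhibition
    """
    r = np.zeros(2)
    for _ in range(steps):
        # each unit is excited by its own touch input and
        # inhibited by the opposite unit's activity
        drive = w_in * touch - w_inh * r[::-1]
        r += dt * (-r + np.maximum(drive, 0.0))  # leaky rectified dynamics
    return r

touch = np.array([1.0, 1.0])  # bimanual stimulation

# before exposure: symmetric gains, baseline inhibition -> unbiased response
before = steady_response(touch, w_in=np.array([1.0, 1.0]), w_inh=0.5)

# after exposure (illustrating the abstract's combined hypothesis):
# asymmetric input gains AND weakened inhibition -> spatially biased response
after = steady_response(touch, w_in=np.array([1.3, 0.9]), w_inh=0.3)

print(before)  # symmetric activity
print(after)   # left unit now responds more strongly than the right
```

In this caricature, changing only `w_in` or only `w_inh` would each move the responses, but the combined change is what jointly shifts both the asymmetry and the overall level — loosely mirroring the abstract's claim that neither class of change alone recapitulates the full effect.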
Cuppini, Cristiano; Magosso, Elisa; Monti, Melissa; Ursino, Mauro; Yau, Jeffrey M.
Files in this item:
File: Cuppini et al. - 2022.pdf
Access: open access
Type: Publisher's version (PDF)
License: Open Access license. Creative Commons Attribution (CC BY)
Size: 7.14 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/901329
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: 0