
Heterogeneous 3-D Perception across Visual Fragments - EYESHOTS

Patrizia Fattori; Claudio Galletti; Michela Gamberini; Rossella Breveglieri; Lauretta Passarelli; Annalisa Bosco; Giacomo Placenti
2011

Abstract

FP7-ICT, grant no. 217077. The project started from the idea of investigating the cognitive value of eye movements during active exploration of the peripersonal space. In particular, we argued that, to interact effectively with the environment, humans use complex motor strategies at the ocular level (possibly extended to other body parts, e.g., head and arms, and thus exploiting multimodal feedback) to extract information useful for building representations of 3D space that are coherent and stable over time. All of the EYESHOTS processing modules build on distributed representations in which sensory and motor aspects coexist, explicitly or implicitly. The models rely on a hierarchy of learning stages at different levels of abstraction, ranging from the coordination of binocular eye movements (e.g., learning disparity-vergence servos), to the definition of contingent saliency maps (e.g., learning object-detection properties), up to the development of the sensorimotor representation for bidirectional eye-arm coordination. In our opinion, this can be considered an interesting methodological result of the project. Distributed coding, indeed, makes it possible to avoid a strict sequentialization of sensory and motor processes, which is desirable for the development of cognitive abilities at a pre-interpretative (i.e., sub-symbolic) level, e.g., when a system must learn binocular eye coordination, handle the inaccuracies of the motor system, and actively measure the space around it.
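The disparity-vergence coordination mentioned in the abstract can be pictured as a simple closed-loop servo: binocular disparity acts as an error signal that the vergence command progressively nulls. The sketch below is purely illustrative (the function name, the gain value, and the proportional control law are our assumptions, not the project's learned model, which is acquired through the hierarchy of learning stages described above):

```python
def vergence_servo(disparity, gain=0.5, tol=1e-3, max_steps=100):
    """Illustrative proportional servo: iteratively reduce binocular
    disparity by adjusting the vergence angle, and return both the
    cumulative correction applied and the residual disparity."""
    correction = 0.0
    for _ in range(max_steps):
        if abs(disparity) < tol:
            break                     # disparity nulled within tolerance
        step = gain * disparity       # proportional control step
        correction += step            # accumulate the vergence command
        disparity -= step             # converging the eyes shrinks the error
    return correction, disparity
```

With a gain below 1, the residual disparity decays geometrically, so the loop settles after a handful of iterations; a learned servo would instead shape this mapping from visual experience.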
2008

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/149742
