Objects and affordances: An Artificial Life simulation

Borghi, Anna Maria
2005

Abstract

We simulated organisms with an arm terminating in a hand composed of two fingers, a thumb and an index finger, each made of two segments; the organisms' behavior was guided by a nervous system simulated with an artificial neural network. The organisms, which evolved through a genetic algorithm, lived in a two-dimensional environment containing four objects, either large or small and either grey or black. In a baseline simulation the organisms had to learn to grasp small objects with a precision grip and large objects with a power grip. In Simulation 1 the organisms learned to perform two tasks: in Task 1 they continued to grasp objects according to their size; in Task 2 they had to report the objects' color by using a precision or a power grip. Learning occurred earlier when the grip required to grasp the object and the grip used to report its color were the same than when they differed, even though object size was irrelevant for the color task. The simulation replicates the result of an experiment by Tucker & Ellis (2001) suggesting that seeing objects automatically activates motor information about how to grasp them.
CogSci2005. Proceedings of the Cognitive Science Society, pp. 2212-2217
Tsiotas G.; Borghi A.; Parisi D.
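The evolutionary setup described in the abstract, a genetic algorithm evolving neural-network weights that map object features to a grip, can be illustrated with a minimal sketch. This is not the authors' actual code: the network here is a single logistic unit, the population size, mutation scheme, and fitness function are all assumptions, and only the baseline task (precision grip for small objects, power grip for large ones) is modeled.

```python
import math
import random

random.seed(0)

# Four objects: small/large (first feature) crossed with grey/black (second feature).
OBJECTS = [(size, color) for size in (0.0, 1.0) for color in (0.0, 1.0)]

def output(weights, obj):
    """One linear unit with a logistic activation: [w_size, w_color, bias]."""
    s = weights[0] * obj[0] + weights[1] * obj[1] + weights[2]
    return 1.0 / (1.0 + math.exp(-s))

def fitness(weights):
    # Baseline task: power grip (output >= 0.5) for large objects,
    # precision grip (output < 0.5) for small ones; color is irrelevant.
    score = 0
    for obj in OBJECTS:
        power_grip = output(weights, obj) >= 0.5
        wanted_power = obj[0] == 1.0  # large object -> power grip
        score += power_grip == wanted_power
    return score

def evolve(pop_size=20, generations=200, mutation_sd=0.5):
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = [[w + random.gauss(0, mutation_sd) for w in p] for p in parents]
        pop = parents + children                       # parents kept (elitism)
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Because the parents survive each generation, the best fitness never decreases; on this linearly separable task the population quickly finds weights that grip all four objects correctly.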

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: http://hdl.handle.net/11585/5445
