A neural network for learning the meaning of objects and words from a featural representation

Ursino, Mauro; Cuppini, Cristiano; Magosso, Elisa
2015

Abstract

The present work investigates how complex semantics can be extracted from the statistics of input features using an attractor neural network. The study focuses on how feature dominance and feature distinctiveness can be naturally coded through Hebbian training, and how similarity among objects can be managed. The model includes a lexical network (representing word-forms) and a semantic network composed of several areas: each area is topologically organized (by similarity) and codes for a different feature. Synapses in the model are created with Hebb rules using different values for the pre-synaptic and post-synaptic thresholds, producing patterns of asymmetrical synapses. The work uses a simple taxonomy of schematic objects (i.e., vectors of features), with shared features (to realize categories) and distinctive features (to individuate members), occurring with different frequencies. The trained network can solve simple object recognition and object naming tasks, maintaining a distinction between categories and their members and assigning different roles to dominant versus marginal features. Marginal features are not evoked in memory when thinking of objects, but they facilitate the reconstruction of objects when provided as input. Finally, the topological organization of features allows the recognition of objects with some modified features.
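The abstract's learning mechanism — a Hebb rule with distinct pre- and post-synaptic activity thresholds — can be illustrated with a minimal sketch. All names, parameter values, and the specific rule form below are illustrative assumptions, not taken from the paper; the point is only to show how unequal thresholds make the weight from unit A to unit B differ from the weight from B to A.

```python
import numpy as np

def hebb_update(W, pre, post, theta_pre=0.2, theta_post=0.5, gamma=0.1):
    """Thresholded Hebb rule (hypothetical parameter values).

    W has shape (n_post, n_pre). Because theta_pre != theta_post, the
    update for the synapse A -> B generally differs from that for
    B -> A, yielding an asymmetrical weight pattern.
    """
    dW = gamma * np.outer(post - theta_post, pre - theta_pre)
    return W + dW

# Store one pattern over two units with different activity levels:
# unit 0 is strongly active, unit 1 only weakly active.
x = np.array([1.0, 0.3])
W = np.zeros((2, 2))
W = hebb_update(W, pre=x, post=x)

# The two reciprocal synapses end up with different strengths.
print(W[0, 1], W[1, 0])
```

With these assumed values the weakly active unit even develops a negative weight toward the strongly active one, while the reverse connection is positive — a toy version of the asymmetry the abstract attributes to the differing thresholds.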
Ursino, M., Cuppini, C., Magosso, E. (2015). A neural network for learning the meaning of objects and words from a featural representation. NEURAL NETWORKS, 63, 234-253 [10.1016/j.neunet.2014.11.009].

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/522975

Citations
  • PMC: 5
  • Scopus: 20
  • Web of Science: 17