Field-informed Reinforcement Learning of Collective Tasks with Graph Neural Networks

Aguzzi, Gianluca; Viroli, Mirko; Esterle, Lukas
2023

Abstract

Coordinating a multi-agent system of intelligent situated agents is a traditional research problem, impacted by the challenges posed by the very notion of distributed intelligence. These problems arise from agents acquiring information locally, sharing their knowledge, and acting accordingly in their environment to achieve a common, global goal. These issues are even more evident in large-scale collective adaptive systems, where agent interactions are necessarily proximity-based, thus making the emergence of controlled global collective behaviour harder. In this context, two main approaches have been proposed for creating distributed controllers out of macro-level task/goal descriptions: manual design, in which programmers build the controllers directly, and automatic design, which involves synthesizing programs using machine learning methods. In this paper, we consider a new hybrid approach called Field-Informed Reinforcement Learning (FIRL). We utilise manually designed computational fields (globally distributed data structures) to manage global agent coordination. Then, using Deep Q-learning in combination with Graph Neural Networks, we enable the agents to learn the necessary local behaviour automatically to solve collective tasks, relying on those fields through local perception. We demonstrate the effectiveness of this new approach in simulated use cases where tracking and covering tasks for swarm robotics are successfully solved.
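Since the abstract only names the two FIRL ingredients at a high level, the sketch below illustrates what their combination could look like: a manually designed computational field (here, a classic hop-count gradient computed by a purely local rule) and a small graph neural network that maps each agent's local observation of that field to per-agent Q-values. This is a minimal illustrative sketch in PyTorch under our own assumptions; the names (gradient_field, GNNQNet) and all design choices are invented here and are not the authors' implementation.

# Illustrative sketch only; not the authors' code. Assumes PyTorch is
# installed; gradient_field and GNNQNet are names invented for this example.
import torch
import torch.nn as nn

def gradient_field(adj: torch.Tensor, sources: torch.Tensor, n_iter: int = 32) -> torch.Tensor:
    """Hop-count gradient, a classic manually designed computational field.

    adj: (N, N) float 0/1 proximity matrix; sources: (N,) bool mask.
    Each agent repeatedly takes the min of its neighbours' values + 1,
    so the field stabilises to the hop distance from the nearest source.
    """
    n = adj.shape[0]
    inf = float(n_iter + 1)
    field = torch.full((n,), inf)
    field[sources] = 0.0
    for _ in range(n_iter):
        # neigh[i, j] = field[j] + 1 where j is a neighbour of i, else "infinity"
        neigh = torch.where(adj.bool(), field.unsqueeze(0) + 1.0, torch.full_like(adj, inf))
        field = torch.minimum(field, neigh.min(dim=1).values)
    return field

class GNNQNet(nn.Module):
    """One round of mean message passing followed by a per-agent Q-value head."""

    def __init__(self, obs_dim: int, hidden: int, n_actions: int):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.update = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        self.q_head = nn.Linear(hidden, n_actions)

    def forward(self, obs: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = self.encode(obs)                               # (N, hidden)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        msg = (adj @ h) / deg                              # mean over neighbours
        h = self.update(torch.cat([h, msg], dim=-1))
        return self.q_head(h)                              # (N, n_actions)

# Toy usage: 5 agents on a line graph; agent 0 is the tracked source.
adj = torch.zeros(5, 5)
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
sources = torch.tensor([True, False, False, False, False])
field = gradient_field(adj, sources)          # hop distances: 0, 1, 2, 3, 4
obs = field.unsqueeze(-1)                     # field value as each agent's observation
qnet = GNNQNet(obs_dim=1, hidden=16, n_actions=4)
greedy = qnet(obs, adj).argmax(dim=-1)        # greedy per-agent actions

In the paper's setting the field would be computed distributedly via neighbour-to-neighbour interaction; the dense-matrix iteration above is just a compact way to simulate that local rule on one machine.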
Year: 2023
Conference: IEEE International Conference on Autonomic Computing and Self-Organizing Systems, ACSOS 2023
Pages: 37–46
Aguzzi, G., Viroli, M., Esterle, L. (2023). Field-informed Reinforcement Learning of Collective Tasks with Graph Neural Networks. In IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS 2023), pp. 37–46. IEEE Computer Society. doi:10.1109/acsos58161.2023.00021
Authors: Aguzzi, Gianluca; Viroli, Mirko; Esterle, Lukas
Files in this item:

File: paper.pdf (open access)
Type: Postprint
License: free open-access license
Size: 526.97 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/962277
Citations
  • PMC: n/a
  • Scopus: 5
  • Web of Science: 3