Distributed learning and optimization of a multi-agent macroscopic probabilistic model

Brumali, Riccardo; Carnevale, Guido; Notarstefano, Giuseppe
2025

Abstract

In this paper, we propose MAcroscopic Consensus and micRoscopic gradient-based OPTimization (MACROPT), a novel distributed method for a network of agents that learns a probabilistic macroscopic model and concurrently optimizes it by acting on the agents' microscopic states. The macroscopic model is defined as the aggregation of local kernels, each representing a probabilistic feature of a single agent (e.g., its local sensing model), while the optimization is performed with respect to a given cost index, e.g., the Kullback–Leibler divergence from a target distribution. MACROPT improves the macroscopic model by microscopically coordinating the agents according to a distributed gradient-based policy. Concurrently, it allows each agent to locally learn the macroscopic model through a consensus-based mechanism. We analyze the resulting interconnected method through the lens of system theory and show that MACROPT asymptotically converges to the set of stationary points of the nonconvex cost function. The theoretical findings are supported by numerical simulations in sensor-network event-detection scenarios.
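To give a flavour of the problem setting described in the abstract, the following is a minimal, hypothetical toy sketch — not the authors' MACROPT algorithm. It assumes 1-D agent states, a macroscopic model built as the average of Gaussian kernels centred at the agents' states on a fixed grid, and plain (finite-difference) gradient descent on the agents' states to reduce the Kullback–Leibler divergence to a target distribution; the distributed consensus mechanism of the paper is not reproduced here.

```python
# Hypothetical sketch: agents move their microscopic states so that the
# aggregated macroscopic model approaches a target distribution.
# All names (kernel, macro_model, kl) are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(-4.0, 4.0, 200)   # discretised domain
sigma = 0.5                          # kernel width (assumed)

def kernel(x):
    """Local kernel of one agent: a Gaussian centred at its state x,
    normalised to a probability distribution on the grid."""
    k = np.exp(-0.5 * ((grid - x) / sigma) ** 2)
    return k / k.sum()

def macro_model(states):
    """Macroscopic model: aggregation (average) of the local kernels."""
    return np.mean([kernel(x) for x in states], axis=0)

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence KL(p || q) on the grid."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

target = kernel(1.5)                     # target distribution
states = rng.normal(-1.0, 0.3, size=5)   # initial microscopic states
step, h = 0.5, 1e-4                      # step size, finite-difference h

init_cost = kl(macro_model(states), target)
for _ in range(200):
    base = kl(macro_model(states), target)
    grads = np.empty_like(states)
    for i, x in enumerate(states):
        # finite-difference gradient of the global cost w.r.t. agent i
        bumped = states.copy()
        bumped[i] = x + h
        grads[i] = (kl(macro_model(bumped), target) - base) / h
    states -= step * grads

final_cost = kl(macro_model(states), target)
```

In this toy version every agent evaluates the global cost, whereas in the distributed setting of the paper each agent would rely only on neighbour-to-neighbour consensus to estimate the macroscopic model; the sketch only illustrates the gradient-on-microscopic-states idea.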
Brumali, R., Carnevale, G., Notarstefano, G. (2025). Distributed learning and optimization of a multi-agent macroscopic probabilistic model. European Journal of Control, 86(Part A), 1-7. https://doi.org/10.1016/j.ejcon.2025.101332
Files in this item:

Final_sbm_ejc_distributed_macroscopic_optimization.pdf
Embargo until 29/07/2026
Type: Postprint / Author's Accepted Manuscript (AAM) - version accepted for publication after peer review
License: Open Access license. Creative Commons Attribution - NonCommercial - NoDerivatives (CC BY-NC-ND)
Size: 461.76 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11585/1025498
Citations
  • PubMed Central: not available
  • Scopus: 0
  • Web of Science: not available