
GTAdam: Gradient Tracking With Adaptive Momentum for Distributed Online Optimization

Guido Carnevale; Francesco Farina; Ivano Notarnicola; Giuseppe Notarstefano
2023

Abstract

This paper deals with a network of computing agents aiming to solve an online optimization problem in a distributed fashion, i.e., by means of local computation and communication, without any central coordinator. We propose the gradient tracking with adaptive momentum estimation (GTAdam) distributed algorithm, which combines a gradient tracking mechanism with first and second order momentum estimates of the gradient. The algorithm is analyzed in the online setting for strongly convex cost functions with Lipschitz continuous gradients. We provide an upper bound for the dynamic regret given by a term related to the initial conditions and another term related to the temporal variations of the objective functions. Moreover, a linear convergence rate is guaranteed in the static setup. The algorithm is tested on a time-varying classification problem, on a (moving) target localization problem, and in a stochastic optimization setup from image classification. In these numerical experiments from multi-agent learning, GTAdam outperforms state-of-the-art distributed optimization methods.
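The abstract describes GTAdam as a combination of a gradient tracking mechanism (each agent tracks the network-wide average gradient through consensus) with Adam-style first- and second-order moment estimates applied to the tracked direction. The following is a minimal illustrative sketch of this idea on a toy static problem, not the authors' exact recursion: local costs, the ring network, the mixing weights, the parameter values, and the omission of bias correction are all assumptions made here for demonstration.

```python
import numpy as np

# Illustrative GTAdam-style update (hedged sketch, not the paper's exact algorithm):
# each agent (i) averages its estimate with neighbors, (ii) tracks the average
# gradient via a gradient-tracking recursion, and (iii) scales the tracked
# direction with Adam-like first/second moment estimates.

N = 4                                  # number of agents (assumed toy setup)
a = np.array([1.0, 2.0, 3.0, 4.0])     # local costs f_i(x) = 0.5 * (x - a_i)^2
x_opt = a.mean()                       # minimizer of the aggregate cost

# Doubly stochastic mixing matrix for a 4-agent ring (assumption).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

alpha, beta1, beta2, eps = 0.005, 0.9, 0.999, 1e-8  # illustrative parameters

x = np.zeros(N)          # local estimates
g = x - a                # local gradients at the current estimates
s = g.copy()             # gradient trackers, initialized at the local gradients
m = np.zeros(N)          # first-moment estimates of the trackers
v = np.zeros(N)          # second-moment estimates of the trackers

for _ in range(4000):
    m = beta1 * m + (1 - beta1) * s
    v = beta2 * v + (1 - beta2) * s**2
    x = W @ x - alpha * m / (np.sqrt(v) + eps)   # consensus + adaptive step
    g_new = x - a
    s = W @ s + g_new - g                        # tracking of the average gradient
    g = g_new

print(np.max(np.abs(x - x_opt)))   # distance of the agents from the optimum
```

In this sketch all agents drive their estimates toward the minimizer of the sum of the local costs while only exchanging information with neighbors through `W`, which is the behavior the abstract attributes to GTAdam in the static, strongly convex setting.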
Guido Carnevale, Francesco Farina, Ivano Notarnicola, Giuseppe Notarstefano (2023). GTAdam: Gradient Tracking With Adaptive Momentum for Distributed Online Optimization. IEEE TRANSACTIONS ON CONTROL OF NETWORK SYSTEMS, 10(3), 1436-1448 [10.1109/TCNS.2022.3232519].
Files in this item:

GTAdam_Gradient_Tracking_With_Adaptive_Momentum_for_Distributed_Online_Optimization.pdf
Access: open access
Type: publisher's PDF version
License: Creative Commons
Size: 977.23 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/918495
Citations
  • PMC: n/a
  • Scopus: 5
  • Web of Science: 6