A distributed asynchronous method of multipliers for constrained nonconvex optimization

Farina, Francesco; Garulli, Andrea; Giannitrapani, Antonio; Notarstefano, Giuseppe
2019

Abstract

This paper presents a fully asynchronous and distributed approach for tackling optimization problems in which both the objective function and the constraints may be nonconvex. In the considered network setting, each node becomes active upon the triggering of a local timer and has access only to a portion of the objective function and to a subset of the constraints. In the proposed technique, based on the method of multipliers, each node performs, when it wakes up, either a descent step on a local augmented Lagrangian or an ascent step on the local multiplier vector. Nodes determine when to switch from the descent step to the ascent one through an asynchronous distributed logic-AND, which detects when all the nodes have reached a predefined tolerance in the minimization of the augmented Lagrangian. It is shown that the resulting distributed algorithm is equivalent to a block coordinate descent for the minimization of the global augmented Lagrangian. This allows the properties of the centralized method of multipliers to be extended to the considered distributed framework. Two application examples are presented to validate the proposed approach: a distributed source localization problem and the parameter estimation of a neural network.
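As an illustration of the descent/ascent structure described in the abstract, the following is a minimal single-node sketch in Python. The toy equality-constrained quadratic, the step size, the stopping tolerance, and the way a single node stands in for the asynchronous distributed logic-AND are assumptions made for this example and are not taken from the paper.

# Minimal sketch (illustrative assumptions throughout, not the authors' exact
# algorithm): one node alternating a descent phase on its local augmented
# Lagrangian with an ascent step on its local multiplier.
import numpy as np

def augmented_lagrangian_grad(x, lam, rho):
    # Toy local problem: minimize 0.5*||x||^2 subject to h(x) = x[0] + x[1] - 1 = 0.
    h = x[0] + x[1] - 1.0
    grad_f = x                       # gradient of the local objective
    grad_h = np.array([1.0, 1.0])    # gradient of the local constraint
    # Gradient of L_rho(x, lam) = f(x) + lam * h(x) + (rho / 2) * h(x)**2
    return grad_f + (lam + rho * h) * grad_h, h

x = np.zeros(2)      # local primal variable
lam = 0.0            # local multiplier
rho = 1.0            # penalty parameter
tol = 1e-6           # tolerance ending the descent phase
step = 0.1           # descent step size

for outer in range(20):
    # Descent phase: gradient steps on the local augmented Lagrangian until
    # the tolerance is met. In the distributed algorithm this switch would be
    # detected by an asynchronous logic-AND over all the nodes; here a single
    # node stands in for the whole network.
    for _ in range(10000):
        grad, h = augmented_lagrangian_grad(x, lam, rho)
        if np.linalg.norm(grad) <= tol:
            break
        x = x - step * grad
    # Ascent phase: update the local multiplier once the descent phase ends.
    lam = lam + rho * h

print("x =", x, "lambda =", lam)

With these assumed data the iterates approach x = (0.5, 0.5) and lambda = -0.5, which solve the toy constrained problem.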
A distributed asynchronous method of multipliers for constrained nonconvex optimization / Farina, Francesco; Garulli, Andrea; Giannitrapani, Antonio; Notarstefano, Giuseppe. - In: AUTOMATICA. - ISSN 0005-1098. - PRINT. - 103:(2019), pp. 243-253. [10.1016/j.automatica.2019.02.003]
Files in this record:
File: main_final_disclaimer_671909.pdf
Open access from 23/02/2021
Description: post-print PDF
Type: Postprint
License: Open Access license. Creative Commons Attribution - NonCommercial - NoDerivatives (CC BY-NC-ND)
Size: 1.1 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/671909
Citations
  • PMC: ND
  • Scopus: 19
  • Web of Science: 17