
A Randomized Block Subgradient Approach to Distributed Big Data Optimization / Francesco Farina, Giuseppe Notarstefano. - ELECTRONIC. - (2019), pp. 6362-6367. (Paper presented at the 58th Conference on Decision and Control (CDC), held in Nice, France, December 11-13, 2019) [10.1109/CDC40024.2019.9030156].

A Randomized Block Subgradient Approach to Distributed Big Data Optimization

Abstract

This paper introduces a novel distributed algorithm over static directed graphs for solving big data convex optimization problems in which the dimension of the decision variable can be extremely high and the objective function can be nonsmooth. In the proposed algorithm, nodes in the network update and communicate only blocks of their current solution estimate rather than the entire vector. The algorithm consists of two main steps: a consensus step and a subgradient update on a single block of the optimization variable (which is then broadcast to neighbors). Agents are shown to asymptotically achieve consensus by studying a block-wise consensus protocol over random graphs. Convergence to the optimal cost is then proven in expected value by exploiting the consensus of the agents' estimates and the randomness of the algorithm. Finally, as a numerical example, a distributed linear classification problem is solved by means of the proposed algorithm.
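The two-step pattern described in the abstract (a consensus averaging step followed by a subgradient update on a single randomly selected block) can be sketched as follows. This is a minimal illustrative simulation, not the authors' implementation: the ring graph, the doubly stochastic weights, the toy nonsmooth cost (a sum of l1 distances to local data), and all variable names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, n_blocks = 4, 6, 3
block = dim // n_blocks                      # block size: 2 coordinates

targets = rng.normal(size=(n_agents, dim))   # agent i's local data a_i
x = rng.normal(size=(n_agents, dim))         # per-agent solution estimates

# Doubly stochastic consensus weights over a toy static ring graph.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

for t in range(1, 2001):
    x = W @ x                                # consensus step over neighbors
    step = 1.0 / np.sqrt(t)                  # diminishing stepsize
    for i in range(n_agents):
        l = rng.integers(n_blocks)           # pick one block at random
        sl = slice(l * block, (l + 1) * block)
        # subgradient of f_i(x) = ||x - a_i||_1 restricted to block l
        g = np.sign(x[i, sl] - targets[i, sl])
        x[i, sl] -= step * g                 # update the block (this block
                                             # would then be broadcast)

# After many iterations the agents' estimates should roughly agree.
spread = np.max(np.std(x, axis=0))
```

Note that each agent perturbs only one block per iteration, so only that block needs to be communicated, which is the source of the per-iteration savings when the decision variable is very high-dimensional.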
Year: 2019
Published in: 2019 IEEE 58th Conference on Decision and Control (CDC)
Pages: 6362-6367
Francesco Farina, Giuseppe Notarstefano
Files in this record:
File
main_disclaimer_729847.pdf

Open Access since 13/09/2020

Type: Postprint
License: License for free, open access
Size: 727.52 kB
Use this identifier to cite or link to this document: `https://hdl.handle.net/11585/729847`