Randomized Block Proximal Methods for Distributed Stochastic Big-Data Optimization

Farina, Francesco; Notarstefano, Giuseppe
2021

Abstract

In this article, we introduce a class of novel distributed algorithms for solving stochastic big-data convex optimization problems over directed graphs. In the addressed setup, the dimension of the decision variable can be extremely high and the objective function can be nonsmooth. The general algorithm consists of two main steps: a consensus step and an update on a single block of the optimization variable, which is then broadcast to neighbors. Three special instances of the proposed method, involving particular problem structures, are then presented. In the general case, the convergence of a dynamic consensus algorithm over random row-stochastic matrices is shown. Then, the convergence of the proposed algorithm to the optimal cost is proven in expected value. Exact convergence is achieved when using diminishing (local) stepsizes, whereas approximate convergence is attained when constant stepsizes are employed. The convergence rate is shown to be sublinear, and an explicit rate is provided in the case of constant stepsizes. Finally, the algorithm is tested on a distributed classification problem, first on synthetic data and then on a real, high-dimensional text dataset.
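To make the two-step iteration concrete, the following is a minimal single-node sketch in Python. It is an illustration under simplifying assumptions, not the paper's implementation: the names (node_update, stoch_grad, weights, block) are hypothetical, the ℓ1 soft-thresholding operator stands in for a generic proximal map of the nonsmooth term, and a fixed row-stochastic weight vector stands in for the random row-stochastic matrices analyzed in the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1; stands in for a generic proximal map.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def node_update(x_i, neighbor_states, weights, block, stoch_grad, stepsize, reg):
    """One iteration at node i (hypothetical interface).

    x_i             : local copy of the decision variable, shape (d,)
    neighbor_states : variables received from in-neighbors this round
    weights         : row-stochastic consensus weights (own weight first)
    block           : indices of the randomly drawn block to update
    stoch_grad      : oracle returning a stochastic (sub)gradient of the
                      local smooth cost, restricted to `block`
    """
    # 1) Consensus step: row-stochastic weighted average of own and
    #    neighbors' variables.
    y = weights[0] * x_i
    for w, x_j in zip(weights[1:], neighbor_states):
        y = y + w * x_j

    # 2) Stochastic proximal (sub)gradient step on the drawn block only;
    #    all other blocks of y are left unchanged.
    x_next = y.copy()
    g = stoch_grad(y, block)
    x_next[block] = soft_threshold(y[block] - stepsize * g, stepsize * reg)

    # 3) Only the updated block is broadcast to out-neighbors, which keeps
    #    per-iteration communication small when the dimension is huge.
    return x_next, (block, x_next[block])
```

Per the abstract, a diminishing local stepsize would make such an iteration converge exactly in expected value, while a constant stepsize yields approximate convergence at an explicit sublinear rate.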
Randomized Block Proximal Methods for Distributed Stochastic Big-Data Optimization / Farina, Francesco; Notarstefano, Giuseppe. - In: IEEE TRANSACTIONS ON AUTOMATIC CONTROL. - ISSN 0018-9286. - PRINT. - 66:9(2021), pp. 4000-4014. [10.1109/TAC.2020.3027647]
Files in this record:

paper870913_disclaimer.pdf (open access)
Type: Postprint
License: free-of-charge open access license
Size: 1.5 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/870913
Citations
  • PMC: not available
  • Scopus: 5
  • Web of Science: 4