
Peer-Reviewed Federated Learning / Mattia Passeri, Andrea Agiollo, Andrea Omicini. - ELECTRONIC. - 3579:(2023), pp. 49-65. (Paper presented at the 24th Workshop "From Objects to Agents" (WOA 2023), held in Rome, Italy, 6–8 November 2023.)

Peer-Reviewed Federated Learning

Andrea Agiollo; Andrea Omicini
2023

Abstract

While representing the de facto framework for enabling distributed training of Machine Learning models, Federated Learning (FL) still suffers from convergence issues when non-Independent and Identically Distributed (non-IID) data are considered. In this context, local model optimisation on different data distributions generates dissimilar updates, which are difficult to aggregate and translate into sub-optimal convergence. To tackle these issues, we propose Peer-Reviewed Federated Learning (PRFL), an extension of the traditional FL training process inspired by the peer-review procedure common in academia, in which model updates are reviewed by several other clients in the federation before being aggregated at the server side. PRFL aims to identify relevant updates while disregarding ineffective ones. We implement PRFL on top of the Flower FL library, and make Peer-Reviewed Flower a publicly available library for the modular implementation of any review-based FL algorithm. A preliminary case study on both regression and classification tasks highlights the potential of PRFL, showcasing how the distributed solution can achieve performance similar to that of the corresponding centralised algorithm, even when non-IID data are considered.
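The abstract describes a round in which peer clients review each update before server-side aggregation. The following toy sketch illustrates that general idea only; it is not the actual PRFL algorithm or the Peer-Reviewed Flower API, and every function name, the scoring rule, and the majority-vote acceptance criterion here are assumptions made for illustration.

```python
def train_local(weights, data):
    # Hypothetical local update: a single toy gradient step on scalar data.
    return [w - 0.1 * (w - x) for w, x in zip(weights, data)]

def review_score(weights, holdout):
    # Hypothetical review: a peer scores a model on its own local data
    # (negative squared error, so a higher score means a better fit).
    return -sum((w - x) ** 2 for w, x in zip(weights, holdout))

def review_round(global_weights, clients, num_reviewers=2):
    # Each client produces an update starting from the current global model.
    updates = [train_local(global_weights, c) for c in clients]
    accepted = []
    for i, upd in enumerate(updates):
        # Each update is reviewed by several other clients in the federation.
        reviewers = [c for j, c in enumerate(clients) if j != i][:num_reviewers]
        scores = [review_score(upd, r) for r in reviewers]
        baseline = [review_score(global_weights, r) for r in reviewers]
        # Keep only updates that a majority of reviewers judge an improvement
        # over the current global model; ineffective updates are disregarded.
        votes = sum(s > b for s, b in zip(scores, baseline))
        if votes > num_reviewers / 2:
            accepted.append(upd)
    if not accepted:
        return global_weights
    # Server-side aggregation (plain averaging) over accepted updates only.
    return [sum(ws) / len(accepted) for ws in zip(*accepted)]
```

In this sketch the review signal is each peer's local loss; the paper's actual review procedure and aggregation rule may differ.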
2023
Proceedings of the 24th Workshop "From Objects to Agents"
pp. 49–65
Mattia Passeri, Andrea Agiollo, Andrea Omicini
Files in this item:
paper4.pdf

Open access

Type: Publisher's version (PDF)
Licence: Open Access Licence. Creative Commons Attribution (CC BY)
Size: 1.34 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/950373
Citations
  • Scopus: 0