Recent privacy regulations worldwide, such as the European Union's General Data Protection Regulation (GDPR), enforce the “right to be forgotten”, allowing individuals to withdraw consent for the use of their data and requiring entities to delete it. In Federated Learning (FL), where a global machine learning model is collaboratively trained by sharing model weights and updates derived from participants' local data, this right necessitates the ability to remove the contribution of client devices that wish to be forgotten. In this paper, we empirically show that, without explicit and specific methods to remove the contributions of clients who request unlearning, their traces remain detectable long after those clients detach. To this end, we monitor the accuracy on forget data (the data held by the client requesting unlearning) and the susceptibility to membership inference attacks targeting the forget data across the rounds that follow the unlearning request. We compare these metrics with those of a retrained model, i.e., a model that has never been exposed to the forget data. Our results show that, without applying explicit unlearning methods, a specific client's contribution gradually diminishes over the course of FL rounds but can still remain noticeable even after 50 rounds.
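The evaluation protocol described above — tracking accuracy on the forget data and susceptibility to a membership inference attack, and comparing against a retrained model — can be sketched with two small metric helpers. This is a minimal illustration under assumptions, not the authors' implementation: the loss-threshold attack is one common, simple form of membership inference, and all function names here are hypothetical.

```python
def forget_accuracy(predictions, labels):
    """Fraction of forget-data examples the global model still
    classifies correctly after the unlearning request."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def loss_threshold_mia(forget_losses, holdout_losses):
    """Loss-threshold membership inference: sweep a threshold over
    per-example losses and return the best attack accuracy.

    A value near 0.5 means the forget data is indistinguishable from
    unseen data (as with a retrained model); values near 1.0 mean the
    departed client's traces are still detectable in the global model.
    """
    # Label 1 = member (forget data), 0 = non-member (holdout data).
    samples = [(l, 1) for l in forget_losses] + [(l, 0) for l in holdout_losses]
    best = 0.5  # a blind attacker can always achieve chance level
    for threshold, _ in samples:
        # Low loss suggests the model has seen the example (member).
        hits = sum((loss <= threshold) == (member == 1) for loss, member in samples)
        best = max(best, hits / len(samples))
    return best
```

Monitoring these two quantities round after round, as the paper does, shows how slowly a client's contribution fades when no explicit unlearning method is applied.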

Mora, A., Bellavista, P. (2025). Is Client Unlearning Really Necessary in Federated Learning? [10.1109/icaiic64266.2025.10920707].

Is Client Unlearning Really Necessary in Federated Learning?

Mora, Alessio; Bellavista, Paolo
2025

Proceedings of the International Conference on Artificial Intelligence in Information and Communication, ICAIIC 2025
pp. 696-701
Files in this record:

ICAIIC.pdf

Under embargo until 18/03/2027

Type: Postprint / Author's Accepted Manuscript (AAM) - version accepted for publication after peer review
License: License for free, open access
Size: 580.52 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1033943
