Mora, A., Bellavista, P. (2025). Is Client Unlearning Really Necessary in Federated Learning? [10.1109/icaiic64266.2025.10920707].
Is Client Unlearning Really Necessary in Federated Learning?
Mora, Alessio; Bellavista, Paolo
2025
Abstract
Recent privacy regulations worldwide, such as the European Union's General Data Protection Regulation (GDPR), enforce the “right to be forgotten”, allowing individuals to withdraw consent for the use of their data and requiring entities to delete it. In Federated Learning (FL), where a global machine learning model is collaboratively trained by sharing model weights and updates derived from participants' local data, this right necessitates the ability to remove the contribution of client devices willing to be forgotten. In this paper, we empirically show that, without explicit and specific methods to remove the contributions of clients who request unlearning, their traces remain detectable long after they detach. To this end, we monitor the accuracy on forget data (the data held by the client requesting unlearning) and the susceptibility to membership inference attacks targeting the forget data across rounds that follow the unlearning request. We compare these metrics with those of a retrained model, a model that has never been exposed to the forget data. Our results show that, without applying explicit unlearning methods, a specific client's contribution gradually diminishes over the course of FL rounds but can still remain noticeable even after 50 rounds.
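The abstract describes an evaluation protocol: track the global model's accuracy on the forget data and its vulnerability to a membership inference attack (MIA) over the rounds after the unlearning request, using a retrained model as the reference. The snippet below is a minimal, hypothetical sketch of these two metrics, not the authors' code: it assumes a loss-threshold MIA in the style of Yeom et al. (2018), and the function names and the midpoint threshold choice are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of the two metrics tracked
# after an unlearning request: accuracy on the forget data and susceptibility
# to a simple loss-based membership inference attack. All names here are
# illustrative assumptions.
import numpy as np

def forget_accuracy(probs: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of forget-set examples the global model still classifies correctly.

    probs: (N, C) predicted class probabilities on the forget data.
    labels: (N,) integer ground-truth labels.
    """
    return float((probs.argmax(axis=1) == labels).mean())

def loss_based_mia(probs_forget, labels_forget, probs_nonmember, labels_nonmember):
    """Loss-threshold MIA: examples with per-sample loss below a threshold are
    predicted to be training members. Returns balanced attack accuracy; a value
    near 0.5 means the forget data is indistinguishable from data the model
    never saw, i.e. the behavior expected of a retrained model."""
    eps = 1e-12
    loss_f = -np.log(probs_forget[np.arange(len(labels_forget)), labels_forget] + eps)
    loss_n = -np.log(probs_nonmember[np.arange(len(labels_nonmember)), labels_nonmember] + eps)
    # Calibrate the threshold as the midpoint of the two mean losses (one
    # simple choice among many; the paper does not prescribe this).
    thr = 0.5 * (loss_f.mean() + loss_n.mean())
    tpr = (loss_f < thr).mean()    # forget examples correctly flagged as members
    tnr = (loss_n >= thr).mean()   # non-member examples correctly rejected
    return float(0.5 * (tpr + tnr))
```

Under this framing, a client's contribution counts as "forgotten" when both metrics converge to those of the retrained baseline, which the paper reports can take more than 50 FL rounds without an explicit unlearning method.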
| File | Size | Format | |
|---|---|---|---|
| ICAIIC.pdf (embargo until 18/03/2027; Type: Postprint / Author's Accepted Manuscript (AAM), the version accepted for publication after peer review; License: free open-access license) | 580.52 kB | Adobe PDF | View/Open · Contact the author |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


