Federated Unlearning in Healthcare: Why It Matters

Mora, Alessio; Mazzocca, Carlo; Montanari, Rebecca; Bellavista, Paolo
2025

Abstract

In healthcare scenarios, privacy poses significant challenges due to the sensitivity of patient data. Federated Learning (FL) has emerged as a promising solution to unlock the potential of such data while maintaining compliance with privacy-preserving regulations: it enables data contributors to train a global model without sharing raw data. However, FL introduces complexities in complying with the right to be forgotten, a fundamental principle of the European General Data Protection Regulation (GDPR). This right ensures that clients can request the removal of their influence from the global model. Unfortunately, the intrinsically decentralized nature of FL makes both retraining the model from scratch and applying centralized Machine Unlearning (MU) methods infeasible. This challenge has led to Federated Unlearning (FU), which aims to efficiently remove a client's influence by post-processing the global model. FU ensures that the unlearned model performs as if the forgotten data had never been seen, while minimizing performance degradation on the remaining data. Since unlearning strategies typically require multiple rounds to restore model performance on retained data, this paper investigates the natural attenuation of a client's contributions over time without FU algorithms. We use the ProstateMRI dataset, a real-world federated healthcare dataset that naturally exhibits feature heterogeneity across parties, and we evaluate metrics such as loss, accuracy, and vulnerability to Membership Inference Attacks (MIAs). Our findings highlight the necessity of FU methods to ensure compliance with privacy regulations and to effectively erase client contributions from the global model. Code available at: https://github.com/alessiomora/medical_federated_unlearning.
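The evaluation protocol described in the abstract (FedAvg-style training, a client dropping out of the federation, and loss/MIA probes on the departed client's data) can be sketched in a few lines. The following is a minimal, self-contained numpy illustration and not the paper's code: the actual experiments use segmentation models on ProstateMRI (see the linked repository), whereas here a toy logistic-regression federation and a simple loss-threshold membership-inference signal are assumptions chosen purely to keep the sketch runnable.

```python
# Minimal sketch (NOT the paper's code): does a departed client's influence
# on a FedAvg global model attenuate on its own, without any FU algorithm?
# The synthetic data, model, and loss-threshold MIA below are illustrative
# assumptions standing in for the paper's ProstateMRI segmentation setup.
import numpy as np

rng = np.random.default_rng(0)
D = 10
w_true = rng.normal(size=D)  # shared ground-truth labelling rule

def make_data(shift, n=200):
    """Toy client data: Gaussian features with a client-specific mean shift
    (a stand-in for feature heterogeneity across parties)."""
    X = rng.normal(loc=shift, size=(n, D))
    y = (X @ w_true + rng.normal(scale=0.5, size=n) > 0).astype(float)
    return X, y

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few epochs of full-batch gradient descent on the logistic loss."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * (X.T @ (p - y)) / len(y)
    return w

def avg_loss(w, X, y):
    """Average binary cross-entropy of model w on (X, y)."""
    p = np.clip(1.0 / (1.0 + np.exp(-(X @ w))), 1e-7, 1 - 1e-7)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

# Four clients with increasing feature shift; the most heterogeneous one
# (client 3) later asks to be forgotten.
clients = [make_data(shift) for shift in (0.0, 0.5, 1.0, 1.5)]
target = 3
holdout = make_data(1.5)  # non-member data drawn like the target's

w = np.zeros(D)
for rnd in range(30):
    # The target participates for 10 rounds, then simply drops out
    # (no FU algorithm is applied).
    ids = range(len(clients)) if rnd < 10 else [i for i in range(len(clients)) if i != target]
    w = np.mean([local_update(w.copy(), *clients[i]) for i in ids], axis=0)  # FedAvg
    # Loss-threshold MIA signal: if the loss on the target's (member) data
    # stays well below the loss on look-alike non-member data, the target's
    # influence has not faded from the global model.
    print(f"round {rnd:2d}  member loss {avg_loss(w, *clients[target]):.3f}"
          f"  non-member loss {avg_loss(w, *holdout):.3f}")
```

The paper asks the same question at scale: the member/non-member loss gap (together with accuracy and MIA success) is tracked over communication rounds after the target client stops contributing, to test whether passive attenuation alone can substitute for an explicit FU method.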
Proceedings of the International Joint Conference on Neural Networks (IJCNN), 2025, pp. 1-7.
Mora, A., Mazzocca, C., Montanari, R., Bellavista, P. (2025). Federated Unlearning in Healthcare: Why It Matters. Institute of Electrical and Electronics Engineers Inc. doi:10.1109/ijcnn64981.2025.11228665.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1033947