Mora, A., Mazzocca, C., Montanari, R., Bellavista, P. (2025). Federated Unlearning in Healthcare: Why It Matters. Institute of Electrical and Electronics Engineers Inc. [10.1109/ijcnn64981.2025.11228665].
Federated Unlearning in Healthcare: Why It Matters
Mora, Alessio; Mazzocca, C.; Montanari, Rebecca; Bellavista, Paolo
2025
Abstract
In healthcare scenarios, privacy poses significant challenges due to the sensitivity of patient data. Federated Learning (FL) has emerged as a promising solution to unlock the potential of such data while maintaining compliance with privacy-preserving regulations. It enables data contributors to train a global model without sharing raw data. However, FL introduces complexities in complying with the right to be forgotten, a fundamental principle of the European General Data Protection Regulation (GDPR). This right ensures that clients can request the removal of their influence from the global model. Unfortunately, the intrinsic decentralized nature of FL makes both retraining the model from scratch and applying Machine Unlearning (MU) methods infeasible. This challenge has led to Federated Unlearning (FU), which aims to efficiently remove a client's influence by post-processing the global model. FU ensures the unlearned model performs as if the forgotten data had never been seen, while minimizing performance degradation on the remaining data. As unlearning strategies typically require multiple rounds to restore model performance on retained data, this paper investigates the natural attenuation of a client's contributions over time when no FU algorithm is applied. We use the ProstateMRI dataset, a real-world federated healthcare dataset that naturally exhibits feature heterogeneity across parties. We evaluate metrics such as loss, accuracy, and Membership Inference Attacks (MIAs). Our findings highlight the necessity of FU methods to ensure compliance with privacy regulations and effectively erase client contributions from the global model. Code available at: https://github.com/alessiomora/medical_federated_unlearning.
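
To make the experimental setup concrete, the following is a minimal sketch of the kind of measurement the abstract describes: FedAvg rounds continue after one client stops participating, and the residual influence of that client is tracked via the loss on its data and a simple loss-threshold Membership Inference Attack (MIA). This is not the authors' released code (see the GitHub link above); it uses synthetic, feature-shifted clients in place of ProstateMRI, a logistic-regression model, and illustrative hyperparameters, all of which are assumptions.

```python
# Minimal sketch (not the paper's released code): FedAvg over synthetic,
# feature-heterogeneous clients. The "forgetting" client stops participating
# after a given round, and we track how much of its influence remains.
import numpy as np

rng = np.random.default_rng(0)
N_CLIENTS, N_PER_CLIENT, DIM = 5, 200, 20
ROUNDS, LOCAL_STEPS, LR = 30, 5, 0.5
FORGET_CLIENT, LAST_ROUND_PRESENT = 0, 10  # hypothetical: client 0 leaves after round 10

w_true = rng.normal(size=DIM)  # shared labeling rule; heterogeneity comes from feature shifts

def make_data(shift, n=N_PER_CLIENT):
    """Synthetic binary-classification data with a client-specific feature shift."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, DIM))
    y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)
    return X, y

clients = [make_data(shift=0.4 * c) for c in range(N_CLIENTS)]
holdout = make_data(shift=0.4 * FORGET_CLIENT)  # unseen samples from the leaving client's distribution

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_sgd(w, X, y):
    """A few full-batch gradient steps of logistic regression (the local update)."""
    for _ in range(LOCAL_STEPS):
        p = sigmoid(X @ w)
        w = w - LR * X.T @ (p - y) / len(y)
    return w

def log_loss(w, X, y):
    p = np.clip(sigmoid(X @ w), 1e-7, 1 - 1e-7)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def mia_auc(w, X_member, y_member, X_out, y_out):
    """Loss-threshold MIA: AUC for separating member vs. non-member samples by per-sample loss."""
    def per_sample_loss(X, y):
        p = np.clip(sigmoid(X @ w), 1e-7, 1 - 1e-7)
        return -(y * np.log(p) + (1 - y) * np.log(1 - p))
    member = per_sample_loss(X_member, y_member)
    non_member = per_sample_loss(X_out, y_out)
    # Members are expected to have lower loss; AUC = P(loss_member < loss_non_member).
    diffs = member[:, None] < non_member[None, :]
    ties = member[:, None] == non_member[None, :]
    return float(diffs.mean() + 0.5 * ties.mean())

w_global = np.zeros(DIM)
for rnd in range(1, ROUNDS + 1):
    active = [c for c in range(N_CLIENTS)
              if not (c == FORGET_CLIENT and rnd > LAST_ROUND_PRESENT)]
    updates = [local_sgd(w_global.copy(), *clients[c]) for c in active]
    w_global = np.mean(updates, axis=0)  # plain FedAvg aggregation with equal weights
    loss_f = log_loss(w_global, *clients[FORGET_CLIENT])
    auc = mia_auc(w_global, *clients[FORGET_CLIENT], *holdout)
    print(f"round {rnd:02d} | forgotten-client loss {loss_f:.3f} | MIA AUC {auc:.3f}")
```

In such a setup, an MIA AUC that stays well above 0.5 (and a forgotten-client loss that remains low) after the client has left indicates that its contribution does not simply fade with further training rounds, which is the kind of evidence the abstract uses to argue that dedicated FU methods are needed.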


