Bellavista, P., Foschini, L., Mora, A. (2021). Communication-Efficient Heterogeneous Federated Dropout in Cross-device Settings. Piscataway: IEEE. doi:10.1109/GLOBECOM46510.2021.9685710.
Communication-Efficient Heterogeneous Federated Dropout in Cross-device Settings
Bellavista, Paolo; Foschini, Luca; Mora, Alessio
2021
Abstract
Federated Dropout has emerged as an elegant solution for jointly achieving communication efficiency and computation reduction on Federated Learning (FL) clients. We claim that Federated Dropout can also efficiently cope with device heterogeneity by exploiting a server that broadcasts custom, differently sized sub-models, selected from a discrete set of possible sub-models, to match the computational capabilities of FL clients. In addition, we further reduce the uplink communication cost by applying per-layer or traditional Sparse Ternary Compression (STC) to the sub-model updates. We demonstrate the effectiveness of our solution by reporting results for a well-known CNN on classification tasks with the Federated EMNIST dataset.
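To make the two mechanisms named in the abstract concrete, the following minimal Python/NumPy sketch shows (i) how a server might map a client's compute capability to one of a discrete set of federated-dropout rates and slice a dense layer accordingly, and (ii) per-layer Sparse Ternary Compression of the resulting sub-model update. The rate set DROPOUT_RATES, the capability-to-rate policy, the helper names, and the sparsity value are illustrative assumptions for this sketch, not the configuration used in the paper; traditional STC would apply the same top-k step to the concatenation of all layers rather than to each layer separately.

import numpy as np

# Illustrative discrete set of federated-dropout rates the server may pick from
# (assumption for this sketch; not the set evaluated in the paper).
DROPOUT_RATES = (0.0, 0.25, 0.5, 0.75)

def pick_dropout_rate(client_capability):
    """Map a normalized client compute capability in [0, 1] to a dropout rate:
    weaker clients receive a smaller (more heavily dropped) sub-model."""
    idx = min(int((1.0 - client_capability) * len(DROPOUT_RATES)),
              len(DROPOUT_RATES) - 1)
    return DROPOUT_RATES[idx]

def extract_submodel(weight, keep_out, keep_in):
    """Slice a dense layer's weight matrix down to the output/input units kept
    after federated dropout (keep_out / keep_in are retained-index arrays)."""
    return weight[np.ix_(keep_out, keep_in)]

def stc_compress(layer_update, sparsity=0.01):
    """Per-layer Sparse Ternary Compression: keep the top-k entries of the layer
    update by magnitude and replace them with +/- mu, where mu is their mean
    absolute value; all other entries become zero."""
    flat = layer_update.ravel()
    k = max(1, int(sparsity * flat.size))
    top_idx = np.argpartition(np.abs(flat), -k)[-k:]
    mu = np.abs(flat[top_idx]).mean()
    out = np.zeros_like(flat)
    out[top_idx] = mu * np.sign(flat[top_idx])
    return out.reshape(layer_update.shape)

# Example round for one client and one dense layer (synthetic shapes).
rng = np.random.default_rng(0)
full_weight = rng.normal(size=(128, 64))
rate = pick_dropout_rate(client_capability=0.4)           # e.g. a mid-range device
keep_out = rng.choice(128, size=int(128 * (1 - rate)), replace=False)
keep_in = rng.choice(64, size=int(64 * (1 - rate)), replace=False)
sub_weight = extract_submodel(full_weight, keep_out, keep_in)
fake_update = rng.normal(size=sub_weight.shape)           # stand-in for a local update
compressed_update = stc_compress(fake_update, sparsity=0.01)

In a real deployment, the server would also need the retained-index arrays (or the seed used to generate them) to map the compressed sub-model update back into the corresponding coordinates of the full model before aggregation.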