Pruning-Based Adaptive Federated Learning at the Edge
Federated Learning (FL) is a new learning framework in which multiple clients collaboratively train a model under the coordination of a central server. Meanwhile, with the advent of the era of large models, model parameter counts are growing explosively, so it is important to design federated learning algorithms suited to the edge environment. However, edge environments are severely constrained in computing, storage, and network bandwidth. At the same time, adaptive gradient methods outperform constant-learning-rate methods in non-distributed settings. In this paper, we propose a pruning-based distributed Adam (PD-Adam) algorithm, which combines model pruning with adaptive learning steps to achieve an asymptotically optimal convergence rate of $O(1/\sqrt[4]{K})$. Moreover, the algorithm achieves convergence consistent with that of the centralized model. Finally, extensive experiments confirm the convergence of our algorithm, demonstrating its reliability and effectiveness across various scenarios. Specifically, our proposed algorithm is 2% and 18% more accurate than the state-of-the-art FedAvg algorithm in experiments with ResNet models on the CIFAR datasets.
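The abstract does not specify PD-Adam's details, so the following is only a minimal sketch of how magnitude pruning and local Adam updates might be combined in a FedAvg-style training loop. Everything here is an illustrative assumption rather than the paper's algorithm: the helper names (`magnitude_prune`, `local_adam_step`), the toy quadratic client objectives, the 50% sparsity level, and the choice to reset Adam moments at each round are all hypothetical.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(sparsity * w.size)
    if k == 0:
        return w
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

def local_adam_step(w, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One standard bias-corrected Adam update on a client."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)
    v_hat = v / (1 - b2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Hypothetical setup: each client i minimizes f_i(w) = 0.5 * ||w - c_i||^2.
rng = np.random.default_rng(0)
n_clients, dim, rounds, local_steps, sparsity = 5, 20, 50, 3, 0.5
targets = rng.normal(size=(n_clients, dim))
w_global = np.zeros(dim)

for rnd in range(rounds):
    client_models = []
    for c in range(n_clients):
        # Each client starts from the pruned global model (assumption:
        # pruning happens server-side before broadcast).
        w = magnitude_prune(w_global.copy(), sparsity)
        m, v = np.zeros(dim), np.zeros(dim)
        for t in range(1, local_steps + 1):
            grad = w - targets[c]  # gradient of the toy quadratic objective
            w, m, v = local_adam_step(w, grad, m, v, t)
        client_models.append(w)
    # Server aggregates by simple averaging, FedAvg-style.
    w_global = np.mean(client_models, axis=0)

print("distance to average target:", np.linalg.norm(w_global - targets.mean(axis=0)))
```

In this sketch, pruning reduces the volume of parameters each client must receive and update, which is the bandwidth/compute saving motivating the edge setting, while the Adam steps supply the adaptive learning rates the abstract credits for improved performance over constant-step methods.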