Pruning-Based Adaptive Federated Learning at the Edge

Federated Learning (FL) is a learning framework in which $s$ clients collaboratively train a model under the coordination of a central server. Meanwhile, with the advent of the era of large models, model parameter counts are growing explosively, so it is important to design federated learning algorithms suited to edge environments. However, edge environments are severely limited in computing, storage, and network-bandwidth resources. Concurrently, adaptive gradient methods outperform constant-learning-rate methods in non-distributed settings. In this paper, we propose a pruning-based distributed Adam (PD-Adam) algorithm, which combines model pruning with adaptive learning steps to achieve an asymptotically optimal convergence rate of $O(1/\sqrt[4]{K})$. At the same time, the algorithm converges consistently with its centralized counterpart. Finally, extensive experiments confirm the convergence of our algorithm, demonstrating its reliability and effectiveness across various scenarios. Specifically, our proposed algorithm is $2\%$ and $18\%$ more accurate than the current state-of-the-art FedAvg algorithm with ResNet models on the CIFAR datasets.
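The abstract describes PD-Adam as combining model pruning with adaptive (Adam-style) learning steps on each client. The paper's actual algorithm is not given here, so the following is only a minimal sketch of the two ingredients it names: an Adam update followed by magnitude-based pruning of the local model. All function names and hyperparameter defaults are illustrative assumptions, not the authors' API.

```python
# Hedged sketch of the two components the abstract names: an Adam-style
# adaptive update and magnitude pruning of the client model. This is NOT
# the paper's PD-Adam algorithm, only an illustration of the idea.

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a parameter vector w (plain Python lists).

    m, v are the running first/second moment estimates; t is the
    1-indexed step count used for bias correction.
    """
    new_w, new_m, new_v = [], [], []
    for wi, gi, mi, vi in zip(w, g, m, v):
        mi = b1 * mi + (1 - b1) * gi          # first-moment estimate
        vi = b2 * vi + (1 - b2) * gi * gi     # second-moment estimate
        m_hat = mi / (1 - b1 ** t)            # bias-corrected moments
        v_hat = vi / (1 - b2 ** t)
        new_w.append(wi - lr * m_hat / (v_hat ** 0.5 + eps))
        new_m.append(mi)
        new_v.append(vi)
    return new_w, new_m, new_v

def magnitude_prune(w, prune_ratio=0.5):
    """Zero out (approximately) the smallest-magnitude fraction of w."""
    k = int(len(w) * prune_ratio)
    if k == 0:
        return list(w)
    threshold = sorted(abs(x) for x in w)[k - 1]
    return [0.0 if abs(x) <= threshold else x for x in w]
```

In an FL round, each client would run `adam_step` on its local data, apply `magnitude_prune` to shrink the model it uploads, and the server would aggregate the sparse updates; the `prune_ratio` controls the communication/accuracy trade-off the abstract motivates for bandwidth-limited edge environments.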

Computational modeling; Convergence; Servers; Adaptation models; Training; Data models; Federated learning; Adaptive learning; Computers; Stochastic processes

Dongxiao Yu, Yuan Yuan, Yifei Zou, Xiao Zhang, Yu Liu, Lizhen Cui, Xiuzhen Cheng


School of Computer Science and Technology, Shandong University, Qingdao, China

School of Software & Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University, Jinan, China

2025

IEEE Transactions on Computers

Year, Volume (Issue): 2025, 74(5)