Privacy-Preserving Resource Allocation for Asynchronous Federated Learning in Wireless Networks
Asynchronous federated learning (AFL) has emerged as a solution to the inefficiency of synchronous federated learning (SFL). However, AFL still faces challenges such as limited communication and computational resources, as well as security threats in wireless networks. This paper proposes a new two-stage proximal policy optimization (PPO) framework that incorporates Transformer encoders. The framework jointly optimizes learning latency, energy consumption, and model accuracy while ensuring physical-layer security through collaborative jamming by devices. Extensive simulation results show that the proposed approach reduces training latency and energy consumption by 74.2% compared to the baseline when the required test accuracy is 0.9.
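The optimizer described above, a PPO agent whose policy network uses a Transformer encoder over per-device state features, can be sketched as follows. This is a minimal illustration in PyTorch: the layer sizes, mean-pooling over devices, discrete action space, and all names are assumptions for exposition, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class TransformerPolicy(nn.Module):
    """Illustrative policy: a Transformer encoder summarizes per-device
    state features (e.g., channel gain, queue length, battery level)
    before emitting action logits. Hypothetical architecture."""
    def __init__(self, feat_dim=8, d_model=32, n_actions=4):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4,
                                           dim_feedforward=64,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_actions)

    def forward(self, states):                # states: (B, n_devices, feat_dim)
        h = self.encoder(self.embed(states))  # (B, n_devices, d_model)
        return self.head(h.mean(dim=1))       # pooled logits: (B, n_actions)

def ppo_clip_loss(logp, logp_old, adv, eps=0.2):
    """Standard PPO clipped surrogate objective."""
    ratio = torch.exp(logp - logp_old)
    return -torch.min(ratio * adv,
                      torch.clamp(ratio, 1 - eps, 1 + eps) * adv).mean()

# Smoke test with random data: 5 samples, 10 devices, 8 features each.
policy = TransformerPolicy()
states = torch.randn(5, 10, 8)
logits = policy(states)
dist = torch.distributions.Categorical(logits=logits)
actions = dist.sample()
loss = ppo_clip_loss(dist.log_prob(actions),
                     dist.log_prob(actions).detach(),
                     torch.randn(5))
```

In a two-stage scheme, one could imagine separate agents of this form handling, say, device selection and resource allocation; the paper's exact decomposition is not detailed in the abstract.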