Dynamic Federated Learning Optimization Method in Edge Scenarios
Edge computing is a new computing paradigm that provides computational services at the network edge. Compared to traditional cloud computing, edge computing offers advantages such as high reliability and low latency. However, federated learning (FL), a distributed machine learning method, still faces challenges from device heterogeneity and data imbalance, which lead to prolonged training time and low training efficiency for some participants (edge devices). To address these challenges, we propose a dynamic federated learning optimization algorithm called FlexFL. The algorithm introduces a two-tier federated learning strategy that deploys multiple FL training services and an FL aggregation service on the same edge device. It evenly partitions the local dataset among the training services and activates a subset of them in each round; inactive services enter a dormant state, freeing their computing resources, which are redistributed evenly among the active services to accelerate training. In this way the algorithm balances the differences in training time caused by device heterogeneity and data imbalance, improving overall training efficiency. Experiments comparing FlexFL with the FedAvg federated learning algorithm on the MNIST and CIFAR datasets show that FlexFL reduces time consumption without compromising model performance.
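The per-round scheduling idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, parameters, and the assumption that resources are measured in CPU cores are all ours.

```python
import random

def flexfl_round(num_services: int, num_active: int, total_cores: float, seed=None):
    """One FlexFL-style scheduling round (illustrative sketch).

    Picks which training services on the edge device are active this round;
    dormant services release their compute share, and all available cores
    are split evenly among the active services.
    """
    rng = random.Random(seed)
    # Randomly activate a subset of the co-located training services.
    active = sorted(rng.sample(range(num_services), num_active))
    # Dormant services free their resources; active ones share everything evenly.
    cores_per_active = total_cores / num_active
    return {service_id: cores_per_active for service_id in active}

# Example: 4 training services on one device, 2 active per round, 8 cores total.
allocation = flexfl_round(num_services=4, num_active=2, total_cores=8, seed=0)
print(allocation)  # two active services, each holding 4.0 cores
```

With half the services dormant, each active service trains on its fixed data shard with double the compute, which is the mechanism the abstract credits for shortening per-round training time on slow or data-heavy devices.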