A client selection strategy for defending against poisoning attacks in federated learning
Federated learning is a method for addressing data silos. However, as adversarial models evolve, adversaries may inject harmful parameters during training, degrading the model's training effectiveness. To enhance the security of the federated learning training process, a client selection strategy for defending against poisoning attacks is designed. In this strategy, a scoring function based on the differential privacy exponential mechanism is used to dynamically update weight parameters. First, identical weight parameters are assigned to each client. Second, the effectiveness of each round of training is quantified and the quantified results are fed into a constructed update function. Third, based on the updated weight parameters, the server selects suitable clients to participate in the current round of training and aggregates the training models they upload. This process is repeated over multiple rounds until an effective and reliable training model is obtained. Finally, the feasibility of the proposed strategy against poisoning attacks is experimentally validated.
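The rounds described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`update_weights`, `select_clients`), the multiplicative weight update, the `epsilon` and `sensitivity` parameters, and the example scores are all assumptions; only the overall loop (equal initial weights, per-round effectiveness scores, an exponential-mechanism-style score term, weighted client selection) follows the strategy outlined in the abstract.

```python
import math
import random

def update_weights(weights, round_scores, epsilon=1.0, sensitivity=1.0):
    """Hypothetical update function: scale each client's selection weight
    by the exponential mechanism's score term exp(eps * score / (2 * sens)).
    A low effectiveness score (e.g. a suspected poisoner) barely grows."""
    return {
        cid: w * math.exp(epsilon * round_scores[cid] / (2 * sensitivity))
        for cid, w in weights.items()
    }

def select_clients(weights, k, rng=random):
    """Sample k distinct clients with probability proportional to weight."""
    chosen = []
    pool = dict(weights)
    for _ in range(min(k, len(pool))):
        total = sum(pool.values())
        r = rng.uniform(0.0, total)
        acc = 0.0
        for cid, w in pool.items():
            acc += w
            if r <= acc:
                chosen.append(cid)
                del pool[cid]
                break
    return chosen

# Step 1: assign each client the same initial weight.
weights = {f"client{i}": 1.0 for i in range(5)}

# Step 2: illustrative per-round effectiveness scores; in the paper these
# would come from quantifying each client's contribution to the round.
scores = {"client0": 0.90, "client1": 0.80, "client2": 0.10,
          "client3": 0.85, "client4": 0.05}

# Steps 2-3: feed scores into the update function, then select clients
# for the next round based on the updated weights.
weights = update_weights(weights, scores)
selected = select_clients(weights, k=3)
```

Clients with consistently low effectiveness scores (here `client2` and `client4`, standing in for poisoners) accumulate lower weights over rounds and are selected less often, which is the intended defensive effect.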