Defense Strategies against Poisoning Attacks in Semi-Asynchronous Federated Learning
Due to its distributed nature, federated learning (FL) is vulnerable to model poisoning attacks, in which malicious clients compromise the accuracy of the global model by submitting tampered model updates. Among the various branches of FL, semi-asynchronous FL, with its relaxed real-time requirements, is particularly susceptible to such attacks. The primary means of detecting malicious clients today is to analyze the statistical characteristics of client updates, but this approach is inadequate for semi-asynchronous FL: the noise introduced by staleness renders existing detection algorithms unable to distinguish benign stale updates from honest clients from malicious updates crafted by attackers. To address malicious client detection in semi-asynchronous FL, this paper proposes SAFLD, a detection method based on predicting model updates. Leveraging the model's historical updates, SAFLD predicts clients' stale updates and assigns each client a maliciousness score; clients with high scores are flagged as malicious and removed. Experiments on two benchmark datasets demonstrate that, compared with existing detection algorithms, SAFLD more accurately detects a range of state-of-the-art model poisoning attacks in the semi-asynchronous FL setting.
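To make the scoring idea concrete, the following is a minimal Python sketch of this kind of prediction-based detection, not the paper's actual algorithm: it estimates a stale client's expected update by summing the recorded global updates over the rounds the client missed, then scores the deviation of the received update with cosine distance. The function names, the simple summation predictor, and the fixed threshold are all illustrative assumptions.

```python
import numpy as np

def predict_stale_update(history, staleness):
    """Estimate what a benign update computed from a model that is
    `staleness` rounds old would look like, by summing the recorded
    global updates over the missed rounds (hypothetical predictor;
    assumes staleness >= 1)."""
    return np.sum(history[-staleness:], axis=0)

def maliciousness_score(client_update, predicted_update):
    """Score a client update by its cosine distance from the prediction;
    larger scores indicate more suspicious updates."""
    cos = np.dot(client_update, predicted_update) / (
        np.linalg.norm(client_update) * np.linalg.norm(predicted_update) + 1e-12
    )
    return 1.0 - cos

def filter_clients(updates, stalenesses, history, threshold=0.5):
    """Keep only clients whose maliciousness score is at or below an
    (assumed) threshold; flagged clients are dropped from aggregation."""
    kept = []
    for cid, (u, s) in enumerate(zip(updates, stalenesses)):
        pred = predict_stale_update(history, s)
        if maliciousness_score(u, pred) <= threshold:
            kept.append(cid)
    return kept
```

In this sketch, a poisoned update that points away from the trajectory implied by recent global updates receives a large score and is excluded, while a merely stale but benign update remains close to its prediction and is kept.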