Gradient purification federated adaptive learning algorithm for Byzantine attack resistance
In the context of industrial big data, data security and privacy are key challenges. Traditional data-sharing and model-training methods struggle against risks such as Byzantine and poisoning attacks, because federated learning typically assumes all participants are trustworthy, which leads to performance drops under attack. To address this, a Byzantine-resilient gradient purification federated adaptive learning algorithm was proposed. Malicious gradients were identified through a sliding window gradient filter and a sign-based clustering filter: the sliding window method detected anomalous gradients, while the sign-based clustering filter selected adversarial gradients based on the consistency of gradient directions. After filtering, a weight-based adaptive aggregation rule performed weighted aggregation on the remaining trustworthy gradients, dynamically adjusting the weights of participant gradients to reduce the impact of malicious gradients and thereby enhance the model's robustness. Experimental results show that even as the intensity of new poisoning attacks increases, the proposed algorithm effectively defends against them while minimizing the loss in model performance. Compared with traditional defense algorithms, it improves both model accuracy and security.
federated learning; Byzantine attack; poisoning attack; model robustness; industrial big data
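The following is a minimal sketch of the purification-and-aggregation pipeline described in the abstract: a sliding-window norm check, a sign-based direction-consistency check, and a weighted adaptive aggregation of the surviving gradients. All function names, thresholds (window, z_thresh, sign_thresh), and the particular statistics and weighting rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def purify_and_aggregate(grads, hist_norms, window=10, z_thresh=3.0, sign_thresh=0.5):
    """
    grads: list of 1-D numpy arrays, one flattened gradient per participant.
    hist_norms: mean gradient norms from previous rounds (sliding-window state).
    Returns the aggregated gradient and the updated norm history.
    Hypothetical sketch; thresholds and statistics are assumptions.
    """
    grads = [np.asarray(g, dtype=np.float64) for g in grads]
    norms = np.array([np.linalg.norm(g) for g in grads])

    # Stage 1: sliding-window norm filter.
    # Compare each participant's gradient norm with statistics of recent rounds;
    # drop clients whose norms deviate by more than z_thresh standard deviations.
    ref = np.array(hist_norms[-window:]) if hist_norms else norms
    mu, sigma = ref.mean(), ref.std() + 1e-12
    keep = [i for i in range(len(grads)) if abs(norms[i] - mu) <= z_thresh * sigma]
    if not keep:                      # fallback: keep everyone
        keep = list(range(len(grads)))

    # Stage 2: sign-based direction-consistency filter.
    # The element-wise majority sign serves as the reference direction; clients
    # whose sign pattern agrees on too few coordinates are treated as adversarial.
    signs = np.sign(np.stack([grads[i] for i in keep]))
    majority = np.sign(signs.sum(axis=0))
    agreement = (signs == majority).mean(axis=1)
    keep = [c for c, a in zip(keep, agreement) if a >= sign_thresh] or keep

    # Stage 3: weight-based adaptive aggregation.
    # Weight surviving gradients by closeness to the coordinate-wise median so
    # that outliers which slipped through the filters still get little influence.
    survivors = np.stack([grads[i] for i in keep])
    median = np.median(survivors, axis=0)
    dists = np.linalg.norm(survivors - median, axis=1)
    weights = 1.0 / (dists + 1e-12)
    weights /= weights.sum()
    aggregated = (weights[:, None] * survivors).sum(axis=0)

    hist_norms = (list(hist_norms) + [float(norms[keep].mean())])[-window:]
    return aggregated, hist_norms
```

In this sketch the server would call purify_and_aggregate once per round on the flattened client updates and apply the returned gradient to the global model; the sliding-window state carries across rounds.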