Vulnerability analysis of federated learning malware detection systems based on backdoor attacks
Deep learning has become one of the core technologies for malware detection. However, it traditionally relies on centralized training, requiring regular database updates and retraining to keep pace with the continuous evolution of malware. Federated learning, an emerging distributed learning paradigm, addresses these issues by training classification models locally on multiple clients and sharing only the learning outcomes to build a global model, thereby protecting data privacy while adapting to diverse malware. Despite these advantages, the distributed nature of federated learning makes it vulnerable to backdoor attacks from malicious clients. This study investigates the vulnerabilities of federated learning in malware detection and analyzes potential malicious attacks such as label-flipping and model-poisoning attacks. Based on this analysis, a novel covert federated adaptive backdoor attack (FABA) is proposed. The attack exploits the characteristics of federated learning by continuously adjusting triggers during client-server interactions to maximize both attack effectiveness and concealment. Experiments on the Virus-MNIST and Malimg datasets demonstrate that the proposed method achieves a 100% attack success rate while remaining highly stealthy, with almost no impact on the prediction accuracy of clean samples. Moreover, the proposed strategy retains high attack success rates and stealth even against the latest defense mechanisms. The use of tiny triggers (only 9 pixels) and a very low proportion of malicious clients (3%) highlights the potential security risks in federated learning and provides crucial insights for future defensive strategies.
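To make the threat model concrete, the sketch below shows the generic data-poisoning step a malicious federated client could perform: stamping a tiny 9-pixel (3x3) trigger patch onto a training image and flipping its label to an attacker-chosen target class. This is a minimal illustration only; the `apply_trigger` helper, the corner placement, and the all-ones trigger pattern are assumptions for demonstration, and the paper's FABA strategy additionally adapts the trigger across client-server rounds, which is not modeled here.

```python
import numpy as np

def apply_trigger(image, trigger, target_label, corner=(0, 0)):
    """Hypothetical backdoor-poisoning step: stamp a small trigger
    patch onto an image and return it with a flipped (target) label.
    """
    poisoned = image.copy()
    r, c = corner
    h, w = trigger.shape
    # Overwrite only the trigger region (here 3x3 = 9 pixels),
    # leaving the rest of the sample untouched for stealth.
    poisoned[r:r + h, c:c + w] = trigger
    return poisoned, target_label

# A 28x28 grayscale sample, matching the Virus-MNIST image size.
clean = np.zeros((28, 28), dtype=np.float32)
trigger = np.ones((3, 3), dtype=np.float32)  # the 9-pixel trigger from the abstract

poisoned, label = apply_trigger(clean, trigger, target_label=0)
print(int(poisoned.sum()))  # → 9: only the 9 trigger pixels changed
```

A malicious client would mix such poisoned samples into its local training set before computing the model update it shares with the server, which is what makes the attack hard to spot from the global model's clean-sample accuracy alone.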