Federated learning is an emerging distributed machine learning framework that mitigates the data-silo and privacy-leakage problems of traditional machine learning by performing joint modeling and training while keeping users' private data within their local domains. However, federated learning suffers from straggler clients whose lagging training drags down the global training speed. Related research has proposed asynchronous federated learning, which allows users to upload their models to the server and participate in the aggregation task as soon as their local updates finish, without waiting for other users. However, asynchronous federated learning still cannot recognize malicious models uploaded by malicious users, and it risks leaking users' privacy. To address these issues, a privacy-preserving Secure Aggregation scheme for asynchronous Federated Learning (SAFL) is designed. Users add perturbations to their locally trained models and upload the perturbed models to the server. The server detects and rejects malicious users through a poisoning detection algorithm to achieve Secure Aggregation (SA). Finally, theoretical analysis and experiments show that, in the asynchronous federated learning scenario, the proposed scheme effectively detects malicious users while protecting the privacy of users' local models and reducing the risk of privacy leakage. The proposed scheme also achieves a significant improvement in model accuracy compared with other schemes.
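The abstract does not specify the perturbation mechanism or the poisoning detection algorithm, so the following is only a minimal sketch of the overall workflow under illustrative assumptions: clients mask their updates with Gaussian noise, and the server flags an update as poisoned when its cosine similarity to the coordinate-wise median of all received updates falls below a threshold. The functions `perturb_update`, `detect_poisoned`, and `aggregate` are hypothetical names, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_update(update, noise_scale=0.01):
    """Client side: mask the local model update with random noise.
    (Hypothetical Gaussian perturbation; SAFL's actual masking
    scheme is not specified in the abstract.)"""
    return update + rng.normal(0.0, noise_scale, size=update.shape)

def detect_poisoned(updates, threshold=0.0):
    """Server side: keep updates whose cosine similarity to the
    coordinate-wise median of all updates is at least `threshold`.
    (An illustrative stand-in for the paper's poisoning detection
    algorithm; the median is robust to a minority of outliers.)"""
    reference = np.median(updates, axis=0)
    accepted = []
    for u in updates:
        cos = np.dot(u, reference) / (
            np.linalg.norm(u) * np.linalg.norm(reference) + 1e-12)
        if cos >= threshold:
            accepted.append(u)
    return accepted

def aggregate(updates):
    """Average the accepted (still-perturbed) updates; zero-mean
    noise largely cancels as the number of clients grows."""
    return np.mean(updates, axis=0)

# Five honest clients push similar updates; one malicious client
# pushes an update in the opposite direction (simulated poisoning).
honest = [perturb_update(np.ones(4)) for _ in range(5)]
malicious = perturb_update(-10.0 * np.ones(4))
accepted = detect_poisoned(honest + [malicious])
global_update = aggregate(accepted)
```

In this toy run the malicious update points away from the median and is rejected, so the aggregated result stays close to the honest consensus; any real instantiation would replace both the noise model and the similarity test with the scheme's actual constructions.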