As a privacy-preserving distributed machine learning paradigm, federated learning is vulnerable to poisoning attacks. Backdoor poisoning is especially difficult to defend against because of its stealthiness. Most existing defenses against backdoor poisoning impose strict constraints on the server or on the malicious participants (the server must hold a clean root dataset, the proportion of malicious participants must be below 50%, poisoning attacks must not begin at the start of training, etc.). When these constraints are not met, the effectiveness of these schemes degrades sharply. To address this problem, this paper proposes a secure aggregation method for federated learning based on model watermarking. In this method, the server embeds a watermark into the initial global model in advance. During subsequent training, it detects malicious participants by verifying whether the watermark has been destroyed in the local models they produce. In the model aggregation stage, local models uploaded by malicious participants are discarded, thereby improving the robustness of the global model. A series of simulation experiments was conducted to verify the effectiveness of the scheme. Experimental results show that the scheme effectively detects backdoor poisoning attacks launched by malicious participants regardless of the proportion of malicious participants, the distribution of participants' data, or the time at which the attack is launched. Moreover, the detection efficiency of the scheme is more than 45% higher than that of an autoencoder-based poisoning defense method.
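The verify-then-filter aggregation loop described above can be sketched as follows. This is a minimal illustration, not the paper's actual embedding scheme: the sign-pattern watermark, the chosen index set, and the 90% agreement threshold (`threshold`) are all assumptions made for this sketch, and models are flattened NumPy parameter vectors.

```python
import numpy as np

def embed_watermark(model, indices, pattern, strength=0.5):
    """Embed a watermark by setting selected weights to a fixed sign pattern.

    `indices` and `pattern` are the server's secret; this sign-based scheme
    is a hypothetical stand-in for the paper's embedding method.
    """
    wm = model.copy()
    wm[indices] = strength * pattern
    return wm

def watermark_intact(model, indices, pattern, threshold=0.9):
    """Check whether enough watermarked weights still match the sign pattern.

    The premise is that benign local training barely disturbs the watermark,
    while backdoor poisoning destroys it.
    """
    agreement = np.mean(np.sign(model[indices]) == pattern)
    return agreement >= threshold

def secure_aggregate(local_models, indices, pattern, threshold=0.9):
    """Discard local models whose watermark is destroyed, then average the rest."""
    accepted = [m for m in local_models
                if watermark_intact(m, indices, pattern, threshold)]
    if not accepted:
        raise ValueError("no local model passed watermark verification")
    return np.mean(accepted, axis=0), len(accepted)

# Toy round: one benign and one malicious participant.
rng = np.random.default_rng(0)
dim = 100
global_model = rng.normal(size=dim)
indices = rng.choice(dim, size=20, replace=False)
pattern = rng.choice([-1.0, 1.0], size=20)

wm_global = embed_watermark(global_model, indices, pattern)
benign = wm_global + 0.01 * rng.normal(size=dim)   # small benign update
malicious = wm_global.copy()
malicious[indices] = -0.5 * pattern                # poisoning wrecks the watermark

aggregated, n_accepted = secure_aggregate([benign, malicious], indices, pattern)
```

Here `n_accepted` is 1: the benign model still matches the sign pattern and is averaged into the global model, while the malicious model fails verification and is dropped before aggregation.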