To address the inability of existing defenses against backdoor attacks in federated learning to effectively remove embedded backdoor features from the model without simultaneously degrading accuracy on the primary task, a federated learning backdoor defense method called ContraFL was proposed. ContraFL uses contrastive training to disrupt the clustering of backdoor samples in the feature space, thereby rendering the global model's classifications in federated learning independent of the backdoor trigger features. Specifically, on the server side, a trigger generation algorithm was developed to construct a generator pool that recovers potential backdoor triggers from the training samples of the global model. The server then distributes the trigger generator pool to the participants, and each participant stamps the generated backdoor triggers onto its local samples to achieve backdoor data augmentation. Experimental results demonstrate that ContraFL effectively defends against various backdoor attacks in federated learning, outperforming existing defense methods.
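The clustering-disruption idea above can be illustrated with a minimal sketch, assuming the contrastive objective takes an InfoNCE-like form in which a clean sample's embedding is pulled toward a benign augmented view and pushed away from its trigger-stamped version; the function names, the two-view setup, and the temperature parameter are illustrative assumptions, not the paper's actual loss.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_backdoor_loss(z_clean, z_triggered, z_positive, tau=0.5):
    """Hypothetical InfoNCE-style loss: treat the benign augmentation as
    the positive pair and the trigger-stamped sample (produced from the
    server-distributed generator pool) as the negative, so triggered
    samples are pushed apart and cannot cluster in the feature space."""
    pos = np.exp(cosine_sim(z_clean, z_positive) / tau)
    neg = np.exp(cosine_sim(z_clean, z_triggered) / tau)
    return -np.log(pos / (pos + neg))

# Illustrative embeddings: the positive view stays close to the clean
# sample, while the trigger-stamped view lies in a different direction.
z = np.array([1.0, 0.0])
z_pos = np.array([0.9, 0.1])
z_trig = np.array([0.0, 1.0])
loss = contrastive_backdoor_loss(z, z_trig, z_pos)
```

Minimizing such a loss locally at each participant would encourage embeddings of triggered samples to scatter rather than aggregate, which is the stated goal of the contrastive training step.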