Federated Learning Scheme Based on Differential Privacy
One of the characteristics of federated learning is that the training server never directly accesses the raw data, so federated learning inherently offers a degree of data protection. However, research shows that federated learning still suffers from privacy leakage during local training and central model aggregation. Differential privacy is a noise-addition technique that injects calibrated noise to prevent an attacker from distinguishing individual user information. We study a hybrid noise-adding algorithm based on local and central differential privacy (LCDP-FL), which can provide local or hybrid differential privacy protection for each client according to its weight and privacy requirements. We show that the algorithm provides each user with the privacy guarantees it needs at minimal additional computational overhead. The algorithm is evaluated on the MNIST and CIFAR-10 datasets and compared with local differential privacy (LDP-FL) and central differential privacy (CDP-FL) algorithms; the results show that the hybrid algorithm improves accuracy, training loss, and privacy protection, achieving the best overall performance among the compared methods.
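To make the hybrid idea described above concrete, the following is a minimal, hypothetical Python sketch: clients with stricter requirements perturb their clipped updates locally before upload (local DP), while the remaining clients rely on noise added once by the server at aggregation (central DP). All function names, privacy budgets, and the per-client choices here are illustrative assumptions, not the authors' LCDP-FL specification.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Clip an update to a maximum L2 norm (standard DP pre-processing)."""
    norm = np.linalg.norm(update)
    return update * (clip_norm / norm) if norm > clip_norm else update

def gaussian_sigma(sensitivity, epsilon, delta):
    """Noise scale of the classic Gaussian mechanism for one release."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def local_perturb(update, clip_norm, epsilon, delta, rng):
    """Local DP: the client adds Gaussian noise before sending its update."""
    update = clip_update(update, clip_norm)
    sigma = gaussian_sigma(clip_norm, epsilon, delta)
    return update + rng.normal(0.0, sigma, size=update.shape)

def aggregate_with_central_noise(updates, clip_norm, epsilon, delta, rng):
    """Central DP: the server averages clipped updates and adds noise once.

    Sensitivity of the fixed-size average is taken as clip_norm / n.
    """
    clipped = [clip_update(u, clip_norm) for u in updates]
    mean = np.mean(clipped, axis=0)
    sigma = gaussian_sigma(clip_norm / len(updates), epsilon, delta)
    return mean + rng.normal(0.0, sigma, size=mean.shape)

# --- One hybrid round (all numbers below are illustrative) ---
rng = np.random.default_rng(0)
dim, clip_norm, delta = 10, 1.0, 1e-5

# Each tuple: (raw update, privacy budget epsilon, needs local DP?)
clients = [
    (rng.normal(size=dim), 1.0, True),   # strict client: noise added locally
    (rng.normal(size=dim), 8.0, False),  # relaxed client: rely on central noise
    (rng.normal(size=dim), 8.0, False),
]

uploads = []
for update, eps, needs_local in clients:
    if needs_local:
        uploads.append(local_perturb(update, clip_norm, eps, delta, rng))
    else:
        uploads.append(clip_update(update, clip_norm))

# The server adds central noise calibrated to the relaxed clients' budget.
global_update = aggregate_with_central_noise(uploads, clip_norm,
                                             epsilon=8.0, delta=delta, rng=rng)
print(global_update)
```

In this sketch the strict client pays the (larger) local noise cost itself, while the relaxed clients share a single, much smaller noise addition at the server, which is the intuition behind trading local against central protection per client.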