Evaluating privacy loss in differential privacy based federated learning
Federated learning (FL) trains a global model by aggregating local training gradients, but private information can be leaked from these gradients. To enhance privacy, differential privacy (DP) is often applied by adding artificial noise, which reduces accuracy compared to noise-free learning. Balancing privacy protection and model accuracy therefore remains a key challenge for DP-based FL. Moreover, current methods measure privacy loss through theoretical bounds, which lack an intuitive assessment. In this paper, we first propose a method for evaluating privacy leakage in FL that uses reconstruction attacks to analyze the difference between original images and their reconstructions. We then formulate the problem of investigating DP's effect on reconstruction attacks: we study the cumulative privacy loss under two different reconstruction-attack settings and prove that anonymizing local clients decreases the probability of privacy leakage. Next, we study how different clipping methods, including a fixed constant and the median of the unclipped gradients' norms, affect privacy protection and learning performance. Furthermore, we derive a theoretical convergence analysis for cosine-similarity- and l_2-norm-based reconstruction attacks under DP noise. Finally, we conduct extensive simulations to show how DP settings affect privacy leakage and to characterize the trade-off between privacy protection and learning accuracy.
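The DP mechanism summarized in the abstract (per-client gradient clipping with either a fixed threshold or the median of the unclipped gradient norms, followed by Gaussian noise on the aggregated update) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation; the function name clip_and_noise, the parameters C and sigma, and the exact noise scaling are illustrative choices.

```python
import numpy as np

def clip_and_noise(gradients, clip="median", C=1.0, sigma=0.5, rng=None):
    """Clip per-client gradients and add Gaussian noise (DP-SGD style sketch).

    clip="fixed"  -> use the constant C as the clipping threshold.
    clip="median" -> use the median of the unclipped gradient norms,
                     mirroring the two clipping strategies compared in the paper.
    """
    if rng is None:
        rng = np.random.default_rng()

    norms = [np.linalg.norm(g) for g in gradients]
    threshold = C if clip == "fixed" else float(np.median(norms))

    # Scale each client's gradient so its norm is at most the threshold.
    clipped = [g * min(1.0, threshold / (n + 1e-12)) for g, n in zip(gradients, norms)]
    aggregated = np.mean(clipped, axis=0)

    # Gaussian mechanism: noise standard deviation proportional to the
    # per-aggregate sensitivity (threshold / number of clients). The constant
    # sigma here is an assumed noise multiplier, not a calibrated privacy budget.
    noise = rng.normal(0.0, sigma * threshold / len(gradients), size=aggregated.shape)
    return aggregated + noise


# Hypothetical usage: 10 clients with 1000-dimensional gradients.
rng = np.random.default_rng(0)
grads = [rng.normal(size=1000) for _ in range(10)]
noisy_update = clip_and_noise(grads, clip="median", sigma=0.5, rng=rng)
```

A reconstruction attack of the kind evaluated in the paper would then optimize a dummy input so that the gradients it induces match such a released update under a cosine-similarity or l_2 distance, which is why the amount of added noise and the choice of clipping threshold directly shape how much the attacker can recover.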