Evaluating privacy loss in differential privacy based federated learning

Federated learning (FL) trains a global model by aggregating local training gradients, but private information can be leaked from these gradients. To enhance privacy, differential privacy (DP) is often used by adding artificial noise. However, this approach reduces accuracy compared to noise-free learning. Balancing privacy protection and model accuracy remains a key challenge for DP-based FL. Additionally, current methods use theoretical bounds to measure privacy loss, lacking an intuitive assessment. In this paper, we first propose an evaluation method for privacy leakage in FL that utilizes reconstruction attacks to analyze the difference between the original images and the reconstructed ones. We then formulate the problem of investigating DP's effect on the reconstruction attack, where we study the accumulative privacy loss under two different reconstruction attack settings and prove that anonymous local clients can decrease the probability of privacy leakage. Next, we study the effects of different clipping methods, including fixed constants and the median value of the unclipped gradients' norm, on privacy protection and learning performance. Furthermore, we derive the theoretical convergence analysis for the cosine similarity and ℓ2-norm-based reconstruction attack under DP noise. We conduct extensive simulations to show how DP settings affect privacy leakage and characterize the trade-off between privacy protection and learning accuracy.
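Since only the abstract is available here, the following is a minimal NumPy sketch (not the paper's implementation) of the two ingredients it mentions: DP-style gradient clipping with either a fixed bound or the median of the unclipped gradient norms, followed by Gaussian noise and aggregation, and the cosine-similarity loss that gradient-matching reconstruction attacks typically minimize. Function names, the `noise_std` scaling, and all parameter choices are illustrative assumptions.

```python
import numpy as np

def clip_and_noise(grads, clip="median", fixed_c=1.0, noise_std=0.1, rng=None):
    """Clip per-client gradients and add Gaussian noise before aggregation.

    grads:  list of 1-D numpy arrays, one flattened gradient per client.
    clip:   "fixed" uses the constant fixed_c as the clipping bound;
            "median" uses the median of the unclipped gradient norms.
    """
    rng = np.random.default_rng() if rng is None else rng
    norms = np.array([np.linalg.norm(g) for g in grads])
    c = fixed_c if clip == "fixed" else float(np.median(norms))
    noisy = []
    for g, n in zip(grads, norms):
        g_clipped = g * min(1.0, c / (n + 1e-12))           # l2-norm clipping
        g_noisy = g_clipped + rng.normal(0.0, noise_std * c, size=g.shape)  # noise scaled by the bound
        noisy.append(g_noisy)
    return np.mean(noisy, axis=0)                            # server-side averaging

def cosine_attack_loss(g_dummy, g_observed):
    """Cosine-similarity matching loss: the attacker optimizes dummy inputs so
    that their gradient aligns with the gradient observed from a client."""
    num = np.dot(g_dummy, g_observed)
    den = np.linalg.norm(g_dummy) * np.linalg.norm(g_observed) + 1e-12
    return 1.0 - num / den
```

Under this sketch, a larger noise_std or a tighter clipping bound makes cosine_attack_loss harder to drive to zero, which is the intuition behind evaluating privacy leakage through the gap between original and reconstructed images.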

Keywords: Federated learning; Differential privacy; Gradient reconstruction; Privacy leakage evaluation

Shangyin Weng, Yan Gou, Lei Zhang, Muhammad Ali Imran


James Watt School of Engineering, University of Glasgow, Glasgow, G12 8QQ, UK

2025

Future generation computer systems: FGCS

ISSN:0167-739X
Year, Volume (Issue): 2025, 172 (Nov.)