
Federated Learning Scheme Based on Differential Privacy

One of the characteristics of federated learning is that the server performing the training does not directly access the raw data, so federated learning inherently provides a degree of data protection. However, research shows that federated learning still suffers from privacy leakage during local data training and central model aggregation. Differential privacy is a noise-adding technique that injects calibrated noise so that an attacker cannot distinguish individual users' information. This paper studies a hybrid noise-adding algorithm based on local and central differential privacy (LCDP-FL), which provides each client with local or hybrid differential privacy protection according to its weight and privacy requirements. The algorithm is shown to deliver the privacy guarantees users require while keeping the computational overhead as low as possible. It is evaluated on the MNIST and CIFAR-10 datasets and compared with local differential privacy (LDP-FL) and central differential privacy (CDP-FL) algorithms; the results show that the hybrid algorithm improves accuracy, loss, and privacy protection, achieving the best overall performance.
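As a concrete illustration of the hybrid idea described above, the sketch below shows one plausible way to combine local and central Gaussian noise in a federated averaging round: clients that require local differential privacy clip and perturb their own updates before upload, while the remaining clients' clipped updates are covered by a single noise term the server adds after weighted aggregation. This is a minimal sketch under assumed, hypothetical parameters (clip_norm, sigma_local, sigma_central), not the paper's LCDP-FL algorithm or its privacy accounting.

```python
import numpy as np

# Illustrative sketch only; parameter names and noise scales are assumptions,
# not values taken from the paper.

def clip(update, clip_norm):
    """Clip a client update to a fixed L2 norm so the noise scale is well defined."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def local_dp_update(update, clip_norm, sigma_local, rng):
    """Local DP: the client adds Gaussian noise before sending its update."""
    clipped = clip(update, clip_norm)
    return clipped + rng.normal(0.0, sigma_local * clip_norm, size=update.shape)

def aggregate_hybrid(updates, weights, use_local_dp, clip_norm,
                     sigma_local, sigma_central, rng):
    """Clients flagged for local DP perturb their own updates; the remaining
    clipped updates are protected by central noise added after the weighted average."""
    perturbed = []
    for u, ldp in zip(updates, use_local_dp):
        if ldp:
            perturbed.append(local_dp_update(u, clip_norm, sigma_local, rng))
        else:
            perturbed.append(clip(u, clip_norm))
    avg = np.average(perturbed, axis=0, weights=weights)
    # Central noise scale tied to the largest clipped, weighted contribution
    # (illustrative only, not a calibrated privacy accountant).
    scale = sigma_central * clip_norm * max(weights) / sum(weights)
    return avg + rng.normal(0.0, scale, size=avg.shape)

# Minimal usage example with random "gradients".
rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(4)]
weights = [1.0, 1.0, 2.0, 2.0]             # e.g. proportional to local dataset size
use_local_dp = [True, True, False, False]  # per-client privacy requirement
global_update = aggregate_hybrid(updates, weights, use_local_dp,
                                 clip_norm=1.0, sigma_local=1.0,
                                 sigma_central=0.5, rng=rng)
```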

Federated learning; Differential privacy; Privacy protection; Hybrid noise; Gradient descent

Sun Min, Ding Xining, Cheng Qian


School of Computer and Information Technology, Shanxi University, Taiyuan 030000, China


Shanxi Provincial Basic Research Program (Grants 20210302123455, 201701D121052)

2024

Computer Science
Chongqing Southwest Information Co., Ltd. (formerly the Southwest Information Center of the Ministry of Science and Technology)


Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 0.944
ISSN:1002-137X
Year, Volume (Issue): 2024, 51(Z1)