PRFL: A Privacy-Preserving Robust Aggregation Method for Federated Learning

Federated learning allows users to jointly train a model by exchanging model parameters, which reduces the risk of data leakage. However, studies have found that private user information can still be inferred from model parameters, and many privacy-preserving model aggregation methods have been proposed in response. Moreover, malicious users can corrupt federated learning aggregation by submitting carefully crafted poisoned models, and when models are aggregated under privacy protection, such users can mount even stealthier poisoning attacks. To achieve privacy protection while resisting poisoning attacks, this paper proposes PRFL, a privacy-preserving robust aggregation method for federated learning. PRFL not only effectively defends against poisoning attacks launched by Byzantine users, but also guarantees the privacy of local models as well as the accuracy and efficiency of the global model. First, a lightweight privacy-preserving model aggregation method under a dual-server architecture is proposed, which achieves privacy-preserving aggregation while preserving global-model accuracy and introducing no significant overhead. Second, a ciphertext-domain model distance computation method is proposed that allows the two servers to compute distances between models without exposing local model parameters; based on this method and the Local Outlier Factor (LOF) algorithm, a poisoned-model detection method is designed. Finally, the security of PRFL is analyzed. Experimental results on two real image datasets show that, in the absence of attacks, PRFL achieves accuracy close to that of FedAvg, and that under both independent and identically distributed (IID) and non-IID data settings it effectively defends against three state-of-the-art poisoning attacks, outperforming the existing Krum, Median, and Trimmed mean methods.
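
The abstract does not spell out the aggregation protocol, so the following is only a minimal sketch, assuming a standard additive secret-sharing scheme between two non-colluding servers; the helper names (`split_update`, `aggregate`) and the NumPy toy model are illustrative assumptions, not PRFL's actual construction.

```python
# Hypothetical sketch of dual-server privacy-preserving aggregation via
# additive secret sharing; the abstract does not give PRFL's concrete
# protocol, so every name here is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

def split_update(update: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split a local model update into two additive shares.

    Each server sees only one share, which on its own is statistically
    masked; the update is recovered only by summing both shares.
    """
    share_a = rng.standard_normal(update.shape)  # random mask
    share_b = update - share_a                   # complementary share
    return share_a, share_b

def aggregate(shares: list[np.ndarray]) -> np.ndarray:
    """Each server independently sums the shares it holds."""
    return np.sum(shares, axis=0)

# --- toy run with 3 clients and a 4-parameter "model" ---
updates = [rng.standard_normal(4) for _ in range(3)]
shares_a, shares_b = zip(*(split_update(u) for u in updates))

# Server A and server B each aggregate their own shares ...
sum_a, sum_b = aggregate(list(shares_a)), aggregate(list(shares_b))

# ... and only the combined result reveals the plaintext global update,
# which matches plain FedAvg exactly (no accuracy loss from the masking).
global_update = (sum_a + sum_b) / len(updates)
assert np.allclose(global_update, np.mean(updates, axis=0))
```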
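The detection step can likewise be pictured as LOF run over pairwise model distances. The sketch below computes a Euclidean distance matrix in the clear and flags outliers with scikit-learn's `LocalOutlierFactor`; in PRFL itself the distances would be obtained through the ciphertext-domain distance computation between the two servers, a step this illustration does not reproduce.

```python
# Hypothetical sketch of poisoned-model detection: pairwise model
# distances (the only quantity the servers need to learn, rather than
# the raw parameters) are fed to the Local Outlier Factor algorithm.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(1)

# 8 honest updates clustered together plus 2 crafted poisoned updates.
honest = rng.normal(loc=0.0, scale=0.1, size=(8, 16))
poisoned = rng.normal(loc=3.0, scale=0.1, size=(2, 16))
updates = np.vstack([honest, poisoned])

# Pairwise Euclidean distance matrix between all model updates.
diff = updates[:, None, :] - updates[None, :, :]
dist = np.linalg.norm(diff, axis=-1)

# LOF over the precomputed distances; -1 marks suspected poisoned models.
lof = LocalOutlierFactor(n_neighbors=4, metric="precomputed")
labels = lof.fit_predict(dist)
print("flagged as poisoned:", np.where(labels == -1)[0])  # expect [8 9]
```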

Keywords: Federated learning; Privacy protection; Poisoning attack; Robust aggregation; Outlier

高琦, 孙奕, 盖新貌, 王友贺, 杨帆

School of Cryptographic Engineering, Information Engineering University, Zhengzhou 450001, China

Unit 93216 of the Chinese People's Liberation Army, Beijing 100085, China

Unit 61623 of the Chinese People's Liberation Army, Beijing 100036, China

Journal: Computer Science (计算机科学)
Publisher: Chongqing Southwest Information Co., Ltd. (formerly the Southwest Information Center of the Ministry of Science and Technology)
Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.944
ISSN: 1002-137X
Year, Volume (Issue): 2024, 51(11)