
Backdoor Attack Defense Method for Federated Learning Based on Model Watermarking

As a privacy-preserving distributed machine learning paradigm, federated learning is vulnerable to poisoning attacks by its participants, and the high stealthiness of backdoor poisoning makes such attacks especially hard to defend against. Most existing defenses against backdoor poisoning impose strict constraints on the server or on the malicious participants (the server must hold a clean root dataset, the proportion of malicious participants must be below 50%, poisoning must not begin in the early rounds of training, etc.); when these constraints cannot be met, their effectiveness degrades sharply. To address this problem, this paper proposes a backdoor attack defense method for federated learning based on model watermarking. The server embeds a watermark into the initial global model in advance and, during subsequent training, detects malicious participants by verifying whether that watermark has been destroyed in the local models they produce. At the aggregation stage, the local models uploaded by malicious participants are discarded, which improves the robustness of the global model. A series of simulation experiments verifies the effectiveness of the scheme: it detects backdoor poisoning attacks launched by malicious participants in federated learning scenarios with no restriction on the proportion of malicious participants, on the data distribution across participants, or on when attacks are launched. Moreover, its malicious-participant detection efficiency is more than 45% higher than that of the existing autoencoder-based poisoning attack defense method.
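The abstract describes the mechanism only at a high level. As a minimal sketch of how such server-side watermark verification could gate aggregation, the following assumes a trigger-set watermark: the server embeds secret key inputs with fixed labels into the initial global model and checks whether each returned local model still classifies them correctly. The names `trigger_set`, `WATERMARK_THRESHOLD`, `watermark_accuracy`, and `filtered_fedavg` are illustrative assumptions, not details from the paper.

```python
# A minimal sketch (not the authors' released code) of watermark-gated
# aggregation, assuming a trigger-set watermark: the server fine-tunes the
# initial global model on secret key inputs with fixed labels, then checks
# whether each returned local model still classifies those keys correctly.
import copy
import torch

WATERMARK_THRESHOLD = 0.8  # assumed cutoff on trigger-set accuracy


def watermark_accuracy(model, trigger_set, device="cpu"):
    """Fraction of secret trigger samples the model still labels as intended."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in trigger_set:  # iterable of (inputs, target labels) batches
            pred = model(x.to(device)).argmax(dim=1)
            correct += (pred == y.to(device)).sum().item()
            total += y.numel()
    return correct / max(total, 1)


def filtered_fedavg(global_model, local_state_dicts, trigger_set):
    """Average only those local models whose watermark survived local training."""
    benign = []
    for state in local_state_dicts:
        probe = copy.deepcopy(global_model)
        probe.load_state_dict(state)
        if watermark_accuracy(probe, trigger_set) >= WATERMARK_THRESHOLD:
            benign.append(state)  # watermark intact -> keep for aggregation
        # else: discard as a suspected backdoor poisoner
    if not benign:  # every update rejected: keep the previous global model
        return global_model.state_dict()
    avg = copy.deepcopy(benign[0])
    for key in avg:
        stacked = torch.stack([s[key].float() for s in benign])
        avg[key] = stacked.mean(dim=0).to(avg[key].dtype)
    return avg
```

The design intuition implied by the abstract is that backdoor fine-tuning on poisoned data tends to overwrite the embedded watermark, while benign local training on clean data leaves it largely intact, so trigger-set accuracy separates the two populations; the 0.8 threshold here is an arbitrary placeholder.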

federated learning; poisoning attack; backdoor attack; anomaly detection; model watermarking

郭晶晶、刘玖樽、马勇、刘志全、熊宇鹏、苗可、李佳星、马建峰


School of Cyber Engineering, Xidian University, Xi'an 710071

School of Computer Science and Technology, Jiangxi Normal University, Nanchang 330022

College of Cyber Security, Jinan University, Guangzhou 510632


Supported by the National Natural Science Foundation of China (62272195, 61932010, 62032025), the Natural Science Basic Research Program of Shaanxi Province (2022JQ-603), the Fundamental Research Funds for the Central Universities (ZYTS23161, 21622402), the Guangdong Provincial Key Laboratory of Network and Information Security Vulnerability Research (2020B1212060081), and the Guangzhou Science and Technology Plan Project (202201010421)

2024

Chinese Journal of Computers (计算机学报)
Sponsored by the China Computer Federation and the Institute of Computing Technology, Chinese Academy of Sciences

Indexing: CSTPCD; Peking University Core Journals (北大核心)
Impact Factor: 3.18
ISSN:0254-4164
Year, Volume (Issue): 2024, 47(3)