Chinese Journal of Computers, 2024, Vol. 47, Issue 3: 662-676. DOI: 10.11897/SP.J.1016.2024.00662


Backdoor Attack Defense Method for Federated Learning Based on Model Watermarking

郭晶晶¹ 刘玖樽¹ 马勇² 刘志全³ 熊宇鹏¹ 苗可¹ 李佳星¹ 马建峰¹

Author Information

  • 1. School of Cyber Engineering, Xidian University, Xi'an 710071, China
  • 2. School of Computer Science and Technology, Jiangxi Normal University, Nanchang 330022, China
  • 3. College of Cyber Security, Jinan University, Guangzhou 510632, China


Abstract

As a privacy-preserving distributed machine learning paradigm, federated learning is vulnerable to poisoning attacks by its participants, and the high stealthiness of backdoor poisoning makes such attacks especially difficult to defend against. Most existing defenses against backdoor poisoning impose strict constraints on the server or on the malicious participants (the server must hold a clean root dataset, the proportion of malicious participants must be below 50%, poisoning must not begin at the start of training, etc.); when these constraints cannot be met, the effectiveness of these schemes degrades sharply. To address this problem, this paper proposes a backdoor attack defense method for federated learning based on model watermarking. In this method, the server embeds a watermark into the initial global model in advance and, during subsequent training, detects malicious participants by verifying whether the watermark has been destroyed in the local models they produce. In the model aggregation stage, local models uploaded by malicious participants are discarded, thereby improving the robustness of the global model. A series of simulation experiments verifies the effectiveness of the scheme. The results show that it can effectively detect backdoor poisoning attacks launched by malicious participants in federated learning scenarios with no restriction on the proportion of malicious participants, on the participants' data distribution, or on when the attacks are launched. Moreover, its malicious-participant detection is more than 45% more efficient than that of the autoencoder-based poisoning attack defense method it is compared against.
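The defense described in the abstract can be sketched end to end: the server embeds a watermark into the initial global model, verifies each uploaded local model against that watermark, and aggregates only the models in which the watermark survives. The sketch below is a minimal toy illustration with linear models in NumPy, not the paper's actual construction; the trigger-set watermark, the 0.9 verification threshold, and the least-squares "embedding" step (standing in for fine-tuning a network on a trigger set) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_TRIGGERS, THRESH = 20, 16, 0.9  # illustrative sizes and threshold

# Hypothetical watermark key: random trigger inputs with server-chosen labels.
triggers = rng.normal(size=(N_TRIGGERS, DIM))
wm_labels = rng.choice([-1.0, 1.0], size=N_TRIGGERS)

def wm_accuracy(w):
    """Fraction of trigger inputs a linear model w labels as the key dictates."""
    return float(np.mean(np.sign(triggers @ w) == wm_labels))

# "Embed" the watermark: fit the initial global model to the trigger set.
# With N_TRIGGERS < DIM the system is underdetermined, so the fit is exact
# and the initial model verifies with watermark accuracy 1.0.
w_global, *_ = np.linalg.lstsq(triggers, wm_labels, rcond=None)

# One training round: benign clients nudge the model slightly (the watermark
# survives), while a backdoored client rewrites it (the watermark breaks).
benign = [w_global + 0.01 * rng.normal(size=DIM) for _ in range(4)]
malicious = [rng.normal(size=DIM)]
uploads = benign + malicious

# Server-side defense: verify the watermark in every uploaded local model
# and aggregate (FedAvg-style mean) only the models that still carry it.
accepted = [w for w in uploads if wm_accuracy(w) >= THRESH]
w_global = np.mean(accepted, axis=0)
```

In this toy setting the benign perturbations shift each trigger score by far less than its margin, so benign models pass verification, while a wholesale model replacement scores near chance on the trigger set and is dropped before aggregation.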


Key words

federated learning / poisoning attack / backdoor attack / anomaly detection / model watermarking


Funding

National Natural Science Foundation of China (62272195, 61932010, 62032025)

Natural Science Basic Research Program of Shaanxi Province (2022JQ-603)

Fundamental Research Funds for the Central Universities (ZYTS23161, 21622402)

Guangdong Provincial Key Laboratory of Network and Information Security Vulnerability Research (2020B1212060081)

Guangzhou Science and Technology Program (202201010421)

Publication Year

2024

Chinese Journal of Computers (计算机学报)
Sponsored by the China Computer Federation and the Institute of Computing Technology, Chinese Academy of Sciences
Indexed in: CSTPCD, CSCD, Peking University Core Journals
Impact factor: 3.18
ISSN: 0254-4164
Cited by: 1 · References: 34