
A Selective Defense Strategy for Federated Learning Against Adversarial Attacks

A Selective Defense Strategy for Federated Learning Against Attacks

Federated Learning (FL) trains a model through local learning on clients and continuous exchange of model parameters between clients and the server, effectively mitigating the data-leakage and privacy risks of centralized machine learning. However, malicious clients participating in FL can mount adversarial attacks by injecting small perturbations during local training, causing the global model to produce incorrect outputs. This paper proposes an effective federated defense strategy, SelectiveFL. The strategy first establishes a selective federated defense framework. Clients then perform adversarial training to extract attack characteristics, and the server selectively aggregates the uploaded local model updates according to those characteristics, ultimately yielding multiple adaptive defense models. The proposed defense is evaluated on several representative benchmark datasets. Experimental results show that it improves model accuracy by 2% to 11% compared with existing work.
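The server-side step described above groups client updates by attack characteristic before averaging, so that each attack type gets its own defense model. The following is a minimal sketch of that idea, assuming each client reports an attack-type label alongside its (flattened) model update; the function name `selective_aggregate`, the label scheme, and the plain FedAvg-style mean within each group are illustrative assumptions, not the paper's actual implementation.

```python
def selective_aggregate(updates, attack_labels):
    """Sketch of selective aggregation: group local model updates by the
    attack characteristic reported for each client, then average each
    group separately, yielding one defense model per attack type.

    updates       -- list of flattened parameter vectors (lists of floats)
    attack_labels -- attack-type label for each client's update
    """
    # Group updates by attack type.
    groups = {}
    for update, label in zip(updates, attack_labels):
        groups.setdefault(label, []).append(update)

    # Average elementwise within each group (FedAvg-style mean).
    aggregated = {}
    for label, group in groups.items():
        n = len(group)
        aggregated[label] = [sum(vals) / n for vals in zip(*group)]
    return aggregated


# Example: four clients, two hypothetical attack types; each update is a
# length-3 parameter vector.
updates = [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0],
           [0.0, 0.0, 0.0], [2.0, 2.0, 2.0]]
labels = ["fgsm", "fgsm", "pgd", "pgd"]
models = selective_aggregate(updates, labels)
# models["fgsm"] -> [2.0, 2.0, 2.0]; models["pgd"] -> [1.0, 1.0, 1.0]
```

In a real deployment, the grouping key would come from the attack characteristics extracted during client-side adversarial training rather than from self-reported labels.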

Federated Learning (FL); Adversarial attack; Defense strategy; Adversarial training

Chen Zhuo, Jiang Hui, Zhou Yang


School of Computer Science and Engineering, Chongqing University of Technology, Chongqing 400054, China

Department of Computer Science and Software Engineering, Auburn University, Auburn, AL 36849, USA


National Natural Science Foundation of China (61471089, 61401076)

2024

Journal of Electronics & Information Technology
Institute of Electronics, Chinese Academy of Sciences; Department of Information Sciences, National Natural Science Foundation of China


Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 1.302
ISSN: 1009-5896
Year, Volume (Issue): 2024, 46(3)