A client selection strategy for defending against poisoning attacks in federated learning
Federated learning is an approach to solving the data-silo problem. However, as attack models evolve, adversaries may inject harmful parameters during training, degrading the trained model. To enhance the security of the federated learning training process, a client selection strategy for defending against poisoning attacks in federated learning is designed. In this strategy, a scoring function based on the differential privacy exponential mechanism is used to dynamically update weight parameters. First, each client is assigned the same initial weight parameter. Second, the effectiveness of each round of training is quantified, and the quantified results are fed into the constructed update function. Third, based on these updated weight parameters, the server selects the clients suitable for participating in the current round of training and aggregates the training models they upload. The entire process is repeated over multiple rounds until an effective and reliable training model is obtained. Finally, the feasibility of the proposed strategy under adversarial poisoning attacks is validated experimentally.
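The abstract only sketches the selection loop, so a short worked example may help. Under the standard exponential mechanism, a client with utility score u_i is chosen with probability proportional to exp(ε·u_i / (2Δu)), where ε is the privacy budget and Δu the score sensitivity. The Python sketch below is an illustrative assumption of how one training round could be organized; the client method train_and_evaluate, the FedAvg-style averaging, and all parameter names are hypothetical and not taken from the paper.

```python
import numpy as np

# Minimal sketch (not the authors' code) of exponential-mechanism-based
# client selection for one federated training round.

def exponential_mechanism_weights(scores, epsilon, sensitivity):
    """Map per-client utility scores u_i to selection probabilities via the
    standard exponential mechanism: Pr[i] ∝ exp(epsilon * u_i / (2 * Δu))."""
    scores = np.asarray(scores, dtype=float)
    logits = epsilon * scores / (2.0 * sensitivity)
    logits -= logits.max()          # subtract the max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()


def federated_round(global_model, clients, weights, k, epsilon, sensitivity, rng):
    """One simplified round: sample k clients by weight, train locally,
    aggregate their models, then refresh the weights from the new scores."""
    chosen = rng.choice(len(clients), size=min(k, len(clients)),
                        replace=False, p=weights)
    updates = []
    scores = np.zeros(len(clients))
    for i in chosen:
        # Hypothetical client API: returns the locally trained model and a
        # quantified training-effect score (e.g., accuracy gain on held-out data).
        local_model, score = clients[i].train_and_evaluate(global_model)
        updates.append(local_model)
        scores[i] = score
    # FedAvg-style aggregation of the selected clients' models.
    new_global = np.mean(updates, axis=0)
    # Update function: next round's weights come from this round's scores;
    # a full implementation would also carry forward history for unselected clients.
    new_weights = exponential_mechanism_weights(scores, epsilon, sensitivity)
    return new_global, new_weights
```

In this sketch the weights passed into the first round would simply be uniform, e.g. `np.full(len(clients), 1.0 / len(clients))`, mirroring the equal initial weight parameters described in the abstract.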

federated learning; poisoning attacks; differential privacy; exponential mechanisms

徐鹤、张迪、李鹏、季一木


School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, Jiangsu, China

Institute of Network Security and Trusted Computing, Nanjing University of Posts and Telecommunications, Nanjing 210023, Jiangsu, China

Jiangsu Engineering Research Center of High Performance Computing and Intelligent Processing, Nanjing 210023, Jiangsu, China


2024

Journal of Nanjing University of Posts and Telecommunications (Natural Science Edition)
Nanjing University of Posts and Telecommunications


Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 0.486
ISSN:1673-5439
Year, Volume (Issue): 2024, 44(6)