
Secure and Efficient Federated Learning for Multi-domain Data Scenarios

To tackle the challenges of poor generalization, catastrophic forgetting and privacy attacks that federated learning faces in multi-domain data training, a scheme for secure and efficient federated learning for multi-domain data scenarios (SEFL-MDS) is proposed. In the local training phase, knowledge distillation is employed to prevent catastrophic forgetting during multi-domain data training while accelerating knowledge transfer across domains, thereby improving training efficiency. In the uploading phase, a Gaussian differential privacy mechanism adds Gaussian noise to both the locally updated gradients and the generalization differences across domains, ensuring secure uploads and enhancing the confidentiality of the training process. In the aggregation phase, a dynamic generalization-weighted aggregation algorithm reduces generalization differences across domains, thereby improving the model's generalization capability. Theoretical analysis shows that the proposed scheme is highly robust. Experiments on the PACS and Office-Home datasets show that the scheme achieves higher accuracy with shorter training time.
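
The abstract outlines three mechanisms: a knowledge-distillation term during local training, Gaussian differential-privacy noise on the uploaded gradients and generalization differences, and dynamic generalization-weighted aggregation at the server. The following NumPy sketch illustrates how these pieces could fit together; every function name, the temperature T, the noise scale sigma, the clipping bound, and the softmax-based weighting rule are illustrative assumptions for exposition, not the authors' exact formulation.

import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax, numerically stabilized.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Local-training phase: KL divergence between temperature-softened
    # teacher (previous-domain model) and student outputs, scaled by T^2
    # as in standard knowledge distillation. Keeping this term small
    # discourages forgetting of earlier domains.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))) * T * T)

def gaussian_dp(x, sigma=0.1, clip=1.0):
    # Uploading phase: clip to bound L2 sensitivity, then add Gaussian
    # noise calibrated to the clipping bound (the Gaussian mechanism).
    x = np.asarray(x, dtype=float)
    x = x * min(1.0, clip / (np.linalg.norm(x) + 1e-12))
    return x + np.random.normal(0.0, sigma * clip, size=x.shape)

def aggregate(updates, gen_gaps):
    # Aggregation phase: weight clients by their (noised) generalization
    # gaps, so domains that generalize worse pull the global model toward
    # them -- one plausible reading of "dynamic generalization-weighted
    # aggregation".
    w = softmax(np.asarray(gen_gaps, dtype=float))
    return sum(wi * ui for wi, ui in zip(w, updates))

# Toy round with three clients holding data from different domains.
rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(3)]   # local gradient updates
gaps = [0.3, 0.1, 0.5]                             # per-domain generalization gaps
noised_updates = [gaussian_dp(u) for u in updates]
noised_gaps = [float(gaussian_dp([g])[0]) for g in gaps]
global_update = aggregate(noised_updates, noised_gaps)

In a full run, each client would also compute its distillation loss against a frozen copy of the model trained on previously seen domains, and would upload both the noised gradient and the noised generalization gap each round.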

Federated Learning; Domain Generalization; Inference Attack; Knowledge Distillation; Differential Privacy

金春花, 李路路, 王佳浩, 季玲, 刘欣颖, 陈礼青, 张浩, 翁健


School of Computer and Software Engineering, Huaiyin Institute of Technology, Huai'an 223003

Fujian Provincial Key Laboratory of Network Security and Cryptology, Fujian Normal University, Fuzhou 350007

College of Information Science and Technology, Jinan University, Guangzhou 510632


Supported by the Major Program of Basic Science (Natural Science) Research of Higher Education Institutions of Jiangsu Province (23KJA520003), the Postgraduate Research and Practice Innovation Program of Jiangsu Province (SJCX24_2144), and the Postgraduate Science and Technology Innovation Program of Huaiyin Institute of Technology (HGYK202418)

Journal: Pattern Recognition and Artificial Intelligence (模式识别与人工智能)

Sponsors: Chinese Association of Automation; National Research Center for Intelligent Computing Systems; Institute of Intelligent Machines, Chinese Academy of Sciences

Indexed in: CSTPCD; Peking University Core Journal List (北大核心)
Impact factor: 0.954
ISSN: 1003-6059
Year, Volume (Issue): 2024, 37(9)