Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition), 2024, Vol. 36, Issue 6: 1120-1127. DOI: 10.3979/j.issn.1673-825X.202404110089

Research on federated learning scheme based on user-level differential privacy

王莉芳 (WANG Lifang)1  罗明星 (LUO Mingxing)1

Author information

  • 1. School of Information Science and Technology, Southwest Jiaotong University, Chengdu 611756, China

Abstract

Federated learning (FL) is a secure distributed machine learning technique that enables participants to collaboratively train a global model superior to individually trained models, without sharing local data. However, extensive research has shown that FL mechanisms still carry privacy leakage risks, so differential privacy (DP) has been widely adopted in FL to protect participants' privacy. To address the difficulty existing DP-based FL schemes have in striking a good balance between data utility and data privacy, this paper proposes a federated learning scheme with user-level differential privacy, named UDPFL-Blur. The scheme uses local differential privacy to guarantee (ε,δ)-DP for every client in the framework. To mitigate the model performance degradation caused by differential privacy, bounded local update regularization is employed to constrain local model updates and improve model utility. To further reduce the adverse effects of differential privacy, local updates are perturbed with noise related to each client's training data. Comparative experiments against other DP-based FL algorithms show that UDPFL-Blur effectively achieves a privacy-utility trade-off for federated learning with user-level differential privacy guarantees.
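
For orientation, the sketch below illustrates the kind of client-side step the abstract describes: regularized local training that keeps the local model close to the global model, followed by clipping the whole local update and adding Gaussian noise before it is sent, which is the standard route to a user-level (ε,δ)-DP guarantee. It is only a minimal sketch under assumptions: `local_update`, `mu`, `clip_norm`, and `noise_multiplier` are illustrative names and values, the proximal term is just one plausible form of "bounded local update regularization", and the paper's data-dependent noise and exact UDPFL-Blur algorithm are not reproduced here.

```python
# Minimal sketch (not the paper's UDPFL-Blur implementation) of a user-level DP
# client round: proximal-regularized local training, then clip and noise the
# whole model update before sending it to the server.
import copy

import torch
import torch.nn.functional as F


def local_update(global_model, loader, clip_norm=1.0, noise_multiplier=1.0,
                 mu=0.01, lr=0.05, epochs=1):
    """Run one client's local round and return a clipped, noised model delta."""
    model = copy.deepcopy(global_model)
    global_params = [p.detach().clone() for p in global_model.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=lr)

    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            # Proximal-style regularizer that keeps the local model close to the
            # global model; assumed here as one form of "bounded local update
            # regularization" (the paper's exact regularizer may differ).
            prox = sum(((p - g) ** 2).sum()
                       for p, g in zip(model.parameters(), global_params))
            (loss + 0.5 * mu * prox).backward()
            opt.step()

    # The local update (model delta) that would be reported to the server.
    delta = [p.detach() - g for p, g in zip(model.parameters(), global_params)]

    # Clip the whole update to L2 norm <= clip_norm: this bounds the user-level
    # sensitivity of one client's contribution.
    total_norm = torch.sqrt(sum((d ** 2).sum() for d in delta))
    scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
    delta = [d * scale for d in delta]

    # Gaussian noise calibrated to the clipping bound; the resulting
    # (epsilon, delta)-DP guarantee comes from the Gaussian mechanism and the
    # chosen noise_multiplier (privacy accounting is omitted here).
    sigma = noise_multiplier * clip_norm
    return [d + torch.randn_like(d) * sigma for d in delta]
```

In schemes of this kind, the privacy accounting that turns the noise multiplier, client sampling rate, and number of rounds into a concrete (ε,δ) is done separately (e.g., with a moments accountant); the sketch above omits that step.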

Keywords

Federated learning (FL) / differential privacy / regularization technique


Publication year

2024

Journal: Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition)
Publisher: Chongqing University of Posts and Telecommunications
Indexing: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.66
ISSN: 1673-825X