Abstract
Federated learning (FL) is a secure distributed machine learning technique that allows participants to collaboratively train a global model superior to any model each party could train alone, without sharing local data. Extensive research has shown, however, that FL mechanisms still carry a risk of privacy leakage, and differential privacy (DP) has been widely applied in FL to protect participants' privacy. To address the difficulty that existing DP-based FL schemes have in striking a good balance between data utility and data privacy, this paper proposes UDPFL-Blur, a federated learning scheme with user-level differential privacy. The scheme leverages local differential privacy to guarantee that each client in the framework satisfies (ε, δ)-DP. To mitigate the model performance degradation introduced by differential privacy, bounded local update regularization is employed to constrain local model updates and improve model utility. To further reduce the adverse effects of differential privacy, local updates are perturbed with noise correlated with each client's training data. Comparative experiments against other DP-based FL algorithms show that UDPFL-Blur effectively achieves the privacy-utility trade-off for federated learning with user-level differential privacy guarantees.
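The abstract does not give UDPFL-Blur's exact mechanism, so the following is only a minimal sketch of the standard user-level DP building block it alludes to: clipping a client's entire model update in L2 norm and adding Gaussian noise calibrated to that bound. The names `privatize_update`, `clip_norm`, and `noise_multiplier` are illustrative, not the paper's API.

```python
import numpy as np

def privatize_update(update, clip_norm, noise_multiplier, rng=None):
    """Clip a client's full model update in L2 norm, then add Gaussian
    noise calibrated to that bound (user-level Gaussian mechanism).

    `update` is a list of numpy arrays, one per model tensor. The
    (epsilon, delta)-DP guarantee follows from standard privacy
    accounting of the noise multiplier across rounds, omitted here.
    """
    rng = rng or np.random.default_rng()
    # Bound the per-user sensitivity: scale the whole update down
    # if its global L2 norm exceeds clip_norm.
    norm = np.linalg.norm(np.concatenate([p.ravel() for p in update]))
    scale = min(1.0, clip_norm / (norm + 1e-12))
    clipped = [p * scale for p in update]
    # Perturb every coordinate with i.i.d. Gaussian noise whose scale
    # is proportional to the sensitivity bound.
    sigma = noise_multiplier * clip_norm
    return [p + rng.normal(0.0, sigma, size=p.shape) for p in clipped]
```

Clipping the whole update, rather than individual example gradients, is what makes the guarantee user-level: the bounded quantity is one client's entire contribution to a round.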
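The bounded local update regularization is likewise not specified in the abstract; a common realization, assumed here purely for illustration, is a FedProx-style proximal penalty that keeps the local model close to the global model during local training. The function name and the penalty weight `mu` are hypothetical.

```python
import torch

def local_train_step(model, global_params, batch, loss_fn, optimizer, mu=0.1):
    """One local step whose loss includes a proximal penalty on the
    squared L2 distance to the global model, keeping the local update
    bounded before it is clipped and noised.

    `global_params` is a list of detached tensors holding the global
    model's parameters received at the start of the round; `mu` is a
    hypothetical penalty weight.
    """
    inputs, targets = batch
    optimizer.zero_grad()
    task_loss = loss_fn(model(inputs), targets)
    # Proximal term: penalize drift of the local parameters from the
    # global model, bounding the magnitude of the local update.
    prox = sum((p - g).pow(2).sum()
               for p, g in zip(model.parameters(), global_params))
    (task_loss + 0.5 * mu * prox).backward()
    optimizer.step()
```

A bounded update also interacts favorably with the DP step above: the smaller the update's norm, the less it is distorted by clipping, which is one plausible reason such regularization can recover utility lost to noise.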