Computer Engineering and Design (计算机工程与设计), 2024, Vol. 45, Issue 12: 3521-3530. DOI: 10.16208/j.issn1000-7024.2024.12.001

Incentive mechanism of federated learning for privacy protection

王超¹ 龙士工¹ 刘光源¹ 张珺铭²
Author information

  • 1. State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China; College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
  • 2. State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China; College of Computer Science and Technology, Guizhou University, Guiyang 550025, China; School of Computer Science and Technology, Guizhou Construction Vocational and Technical College, Guiyang 551400, China

Abstract

An algorithm that combined prospect theory with differential privacy was proposed to address privacy protection and data quality issues in federated learning. From the perspective of maximizing the utility of data holders based on prospect theory, the incentive problem of data holders was transformed into a utility optimization problem, and the optimal reward and punishment strategy was found to motivate users to participate in federated learning. An evolutionary game model based on prospect theory was constructed. The evolution trend of the game model in different application scenarios was analyzed using local stability analysis and numerical simulation. Experimental results show that the proposed method can increase the proportion of users participating in federated training, improve the accuracy of the finally shared federated learning model, and reduce the risk of user privacy leakage.
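The evolutionary dynamics summarized above can be sketched as a minimal simulation: a two-strategy replicator-dynamics loop in which each data holder weighs the server's reward against its participation cost through a prospect-theory value function. This is an illustrative toy model, not the paper's calibrated formulation; the payoff parameters (`reward`, `penalty`, `cost`) and the Kahneman-Tversky coefficients are assumed values.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave for gains,
    convex and loss-averse (loss-aversion factor lam) for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def replicator_step(p, reward, penalty, cost, dt=0.01):
    """One Euler step of two-strategy replicator dynamics for the
    population share p of data holders choosing to participate.
    Perceived payoffs are filtered through the prospect value function."""
    u_participate = prospect_value(reward - cost)  # net gain from joining
    u_refuse = prospect_value(-penalty)            # perceived loss when punished
    return p + dt * p * (1 - p) * (u_participate - u_refuse)

# With a sufficiently large reward relative to cost, participation
# spreads through the population from a small initial share.
p = 0.1
for _ in range(2000):
    p = replicator_step(p, reward=3.0, penalty=1.0, cost=1.5)
```

Local stability of the rest points p = 0 and p = 1 can be read off the sign of `u_participate - u_refuse`: when it is positive, p = 1 (full participation) is the stable state, which mirrors the local stability analysis the abstract refers to.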

Keywords

federated learning; privacy protection; prospect theory; differential privacy; utility optimization; optimal reward and punishment strategy; evolutionary game


Publication year: 2024

Journal: Computer Engineering and Design (计算机工程与设计)
Publisher: 706th Institute, Second Academy, China Aerospace Science and Industry Corporation
Indexing: CSTPCD; Peking University Core Journals
Impact factor: 0.617
ISSN: 1000-7024