
PFKD: A Personalized Federated Learning Framework that Integrates Data Heterogeneity and Model Heterogeneity

Federated learning is an important method for addressing two critical challenges in machine learning: data sharing and privacy protection. However, federated learning itself faces challenges of data heterogeneity and model heterogeneity. Existing research often focuses on only one of these issues while overlooking the correlation between them. To address this, this paper proposes a framework named PFKD (Personalized Federated learning based on Knowledge Distillation), which uses knowledge distillation to handle model heterogeneity and a personalization algorithm to handle data heterogeneity, thereby achieving more personalized federated learning. Experimental analysis validates the effectiveness of the proposed framework: it overcomes the model's performance bottleneck and improves model accuracy by approximately one percentage point, and its performance improves further with appropriate hyperparameter tuning.
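The abstract describes the mechanism only at a high level. As a rough, generic illustration of how a knowledge-distillation term can be combined with a client's local supervised loss in this kind of framework, the sketch below shows a temperature-scaled distillation objective. The function name, the temperature T, the mixing weight alpha, and teacher_logits are illustrative assumptions and are not taken from the PFKD paper.

```python
# Illustrative sketch only: a generic knowledge-distillation objective of the kind
# a KD-based personalized federated learning client might optimize. All names
# (T, alpha, teacher_logits) are assumptions, not the paper's actual PFKD design.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Weighted sum of hard-label cross-entropy and soft-label KL divergence."""
    # Hard-label term: fit the client's local (possibly non-IID) data.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label term: transfer knowledge from a (possibly architecturally
    # different) teacher model via temperature-scaled output distributions.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kd
```

Because the distillation term only compares output distributions, the client and teacher models need not share an architecture, which is why knowledge distillation is a natural tool for model heterogeneity; the hard-label term keeps the client model personalized to its own local data.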

federated learning; data heterogeneity; model heterogeneity

陈学斌、任志强


College of Science, North China University of Science and Technology / Hebei Key Laboratory of Data Science and Application / Tangshan Key Laboratory of Data Science, Tangshan 063210, China


National Natural Science Foundation of China

U20A20179

2024

Journal of Nanjing University of Information Science & Technology
Nanjing University of Information Science & Technology


Indexed in: CSTPCD; PKU Core Journals
Impact factor: 0.737
ISSN: 1674-7070
Year, Volume (Issue): 2024, 16(4)