
Medical Dialogue Text Generation Based on Dynamic Federated Distillation

To address the knowledge-misleading problem caused by data heterogeneity in traditional federated distillation, FedKS, a model for medical dialogue text generation through dynamic knowledge fusion and client selection for distillation, is proposed, providing a finer-grained modeling of the knowledge accumulation and transmission process. First, an effective knowledge aggregation mechanism is designed within federated learning. Second, to address misleading knowledge, a threshold-based method is proposed to optimize the selection of local model updates on each client: by computing the performance gain of the global model on each client, it is decided whether to adopt the local model after knowledge distillation. FedKS can effectively mitigate the degradation of local model performance caused by misleading knowledge, thereby achieving efficient knowledge aggregation and transmission. Experiments on multiple benchmark datasets show that FedKS converges faster during training and outperforms existing baselines; in addition, it supports heterogeneous client models.
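The threshold-based update selection described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `select_client_updates`, the threshold `tau`, and the per-client score fields are hypothetical placeholders for whatever metric (e.g. validation performance of the distilled model) the actual method uses.

```python
def select_client_updates(clients, tau=0.0):
    """For each client, adopt the knowledge-distilled local model only if
    the measured performance gain exceeds the threshold tau; otherwise the
    client keeps its previous local model (guarding against misleading
    knowledge from heterogeneous data)."""
    adopted = []
    for client in clients:
        # Gain contributed by distillation from the aggregated global knowledge.
        gain = client["score_after_distill"] - client["score_before"]
        if gain > tau:
            adopted.append(client["name"])
    return adopted


# Hypothetical clients: B's distilled model got worse (misleading knowledge),
# so its update is rejected while A and C adopt theirs.
clients = [
    {"name": "A", "score_before": 0.62, "score_after_distill": 0.70},
    {"name": "B", "score_before": 0.58, "score_after_distill": 0.55},
    {"name": "C", "score_before": 0.64, "score_after_distill": 0.66},
]
print(select_client_updates(clients))  # → ['A', 'C']
```

The key design point is that selection is per-client and per-round: a client rejected in one round may still adopt a distilled model in a later round once the aggregated knowledge better matches its local data.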

federated distillation; knowledge aggregation; client selection; knowledge misleading

Liu Yupeng, Lin Minghao


School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150006, China


Supported by the National Natural Science Foundation of China (61300115), the China Postdoctoral Science Foundation (2014M561331), and the Scientific Research Project of the Education Department of Heilongjiang Province (12521073).

2024

Journal of Southeast University (Natural Science Edition)
Southeast University

Indexed in: CSTPCD, PKU Core Journals
Impact factor: 0.989
ISSN: 1001-0505
Year, Vol. (Issue): 2024, 54(4)