Entropy-based Federated Incremental Learning and Optimization in Industrial Internet of Things
杨睿哲 1, 谢欣儒 1, 滕颖蕾 2, 李萌 1, 孙艳华 1, 张大君 3
Author information
- 1. Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; Beijing Laboratory of Advanced Information Networks, Beijing University of Technology, Beijing 100124, China
- 2. School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing 100083, China
- 3. Carleton University, Ottawa K1S 5B6, Canada
Abstract
In the face of large-scale, diverse, and time-evolving data and machine learning tasks in industrial production processes, this paper proposes a Federated Incremental Learning (FIL) and optimization method based on information entropy. Within the federated framework, local computing nodes train models on their local data and compute the average information entropy, which is uploaded to the server to assist in identifying class-incremental tasks. The global server then selects the local nodes that participate in the current training round according to the reported average entropy, decides whether an incremental task has occurred, and performs global model distribution and aggregation updates. The proposed method combines average entropy with thresholds for node selection in different situations, achieving stable model learning under low average entropy and incremental model expansion under high average entropy. On this basis, convex optimization is employed to adaptively adjust the aggregation frequency and resource allocation under limited resources, ultimately achieving effective model convergence. Simulation results demonstrate that the proposed method accelerates model convergence and improves training accuracy in different scenarios.
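The entropy-driven node selection described in the abstract can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the function names, the use of Shannon entropy over softmax output vectors, and the single-threshold split between "stable" and "incremental" nodes are all assumptions for illustration.

```python
import math

def average_entropy(prob_batches):
    """Mean Shannon entropy of a node's softmax outputs.

    prob_batches: list of probability vectors (each sums to 1).
    A high average entropy suggests the local model is uncertain,
    e.g. because samples from an unseen (class-incremental) task
    have arrived at this node.
    """
    total = 0.0
    for probs in prob_batches:
        total += -sum(p * math.log(p) for p in probs if p > 0)
    return total / len(prob_batches)

def select_nodes(node_entropies, threshold):
    """Threshold-based split performed at the server (illustrative).

    node_entropies: dict mapping node id -> reported average entropy.
    Returns (stable, incremental): nodes at or below the threshold keep
    refining the current global model; nodes above it signal a possible
    class-incremental task and trigger incremental model expansion.
    """
    stable = [n for n, h in node_entropies.items() if h <= threshold]
    incremental = [n for n, h in node_entropies.items() if h > threshold]
    return stable, incremental
```

For example, a node whose model outputs uniform probabilities over two classes reports an average entropy of ln 2 ≈ 0.693, while a confidently predicting node reports a value near 0; the server can then route only the high-entropy nodes into the incremental-expansion branch.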
Key words
Industrial Internet of Things (IIoT) / Federated Incremental Learning (FIL) / Information entropy
Funding
National Natural Science Foundation of China (62171062)
National Natural Science Foundation of China (62371012)
Publication year
2024