
Evolutionary Many-Task Optimization Framework Based on Machine Learning

Evolutionary multitasking optimization has been one of the research hotspots in computational intelligence in recent years. Its principle is to improve the efficiency with which evolutionary algorithms solve multiple tasks simultaneously through knowledge transfer between tasks. Because inter-task similarity strongly influences positive knowledge transfer between tasks, how to measure the similarity between tasks has become a key research direction. At present, when evolutionary multitasking optimization handles two tasks, the choice of the auxiliary task is limited to one of the two, and when it handles many tasks, knowledge transfer between tasks lacks flexibility. To address this, this paper proposes a machine-learning-based evolutionary many-task optimization framework, named MaTML. The framework combines the subpopulations associated with all tasks into a unified initial population, uses the skill factor of the target task and its corresponding population individuals to construct the labels and the training set respectively, fits a model with 10-fold cross-validation, and applies the model to predict the individuals similar to the target task, which form the auxiliary population, thereby promoting positive knowledge transfer during evolutionary optimization. The proposed algorithm can find the auxiliary population of the target task among dynamically changing individuals, so it can not only flexibly select similar auxiliary tasks for three or more tasks, but also effectively select the auxiliary task when the number of tasks is two. Compared with state-of-the-art multi-task and many-task algorithms on the CEC2017 and WCCI2020SO test suites respectively, the experimental results confirm that MaTML achieves superior or competitive performance on multi-task optimization problems. In addition, the paper studies MaTML's computational resources, model performance, model stability, and related components in detail. Finally, tests on a real-world problem further verify the effectiveness of MaTML.
Evolutionary Many-Task Optimization Framework Based on Machine Learning
Evolutionary multitasking optimization has been one of the research hotspots in computational intelligence in recent years. Its principle is to enhance the efficiency with which evolutionary algorithms solve multiple tasks simultaneously through knowledge transfer between tasks. Since inter-task similarity plays an important role in promoting positive knowledge transfer between tasks, how to measure the similarity between tasks has become one of the key research directions. Existing evolutionary multitasking algorithms can be divided into evolutionary multi-task algorithms and evolutionary many-task algorithms according to the number of tasks processed. However, these algorithms are not efficient enough at handling multi-task and many-task problems at the same time. For example, when evolutionary multitasking optimization tackles two tasks, the selection of the auxiliary task is limited to one of them, which can lead to a lack of flexibility in inter-task knowledge transfer when dealing with many tasks. In addition, knowledge transfer should occur dynamically across all tasks rather than being limited to one or a few tasks, which requires the algorithm to adaptively find, during the iterations, individuals similar to those of the target task. Based on machine learning, this paper proposes a framework for evolutionary multitasking optimization, named MaTML, which combines all task-associated subpopulations to form a unified initial population. MaTML employs the skill factor of the target task and its corresponding population individuals to construct the labels and the training set respectively, uses 10-fold cross-validation to fit the model, and then applies the model to predict the population individuals similar to the target task, which compose the auxiliary population, so as to achieve more positive knowledge transfer in evolutionary optimization. Specifically, in each iteration, the skill factor of the target task is taken as the training label, and all individuals corresponding to this label, that is, the target task individuals, are combined to form the training samples. A model is then trained with a machine learning algorithm and applied to predict the individuals in the population other than the target task individuals. Since the characteristics of the individuals that the model predicts as belonging to the target task are essentially similar to those of the target task individuals, these individuals can be labeled as auxiliary individuals of the target task. The proposed algorithm can find the auxiliary population of the target task among dynamically changing population individuals, so it can not only flexibly select similar auxiliary tasks for three or more tasks, but also effectively select the auxiliary task when the number of tasks is two. Moreover, the selection of population individuals for the auxiliary tasks of the target task is variable and dynamically adaptive in MaTML, and there is no need to associate a fixed subpopulation with the target task. Compared with state-of-the-art multi-task algorithms and many-task algorithms on the CEC2017 test suite and the WCCI2020SO test suite respectively, the experimental results show that MaTML has superior or competitive performance in optimizing multi-task and many-task problems. We also conducted a detailed study of MaTML's computational resources, model performance, model stability, and related components. Finally, a real-world optimization problem was used as a further test to verify the effectiveness of MaTML.
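The auxiliary-population selection described above can be sketched in a few lines of Python. The snippet below is a minimal illustration only, assuming the individuals live in a unified search space encoded as numeric vectors and using a scikit-learn SVC as a stand-in learning model; the function name, the classifier choice, and the returned cross-validation score are illustrative assumptions, not the paper's exact implementation.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def select_auxiliary_population(population, skill_factors, target_task):
    """Return indices of non-target individuals that a classifier predicts
    to be similar to the target task (a sketch of the core idea)."""
    X = np.asarray(population, dtype=float)                       # individuals in the unified space
    y = (np.asarray(skill_factors) == target_task).astype(int)    # 1 = target task, 0 = other tasks

    clf = SVC(kernel="rbf")                                       # illustrative model choice
    # 10-fold cross-validation on the labelled population, mirroring the
    # framework's use of 10-fold cross-validation when fitting the model.
    cv_accuracy = cross_val_score(clf, X, y, cv=10).mean()

    clf.fit(X, y)                                                 # fit on the whole unified population
    others = np.where(y == 0)[0]                                  # individuals belonging to other tasks
    predicted_similar = others[clf.predict(X[others]) == 1]       # predicted similar to the target task
    return predicted_similar, cv_accuracy

In each generation the returned indices would be merged with the target task's own individuals before reproduction, and the classifier is refit as the population evolves, so the auxiliary population adapts dynamically rather than being tied to a fixed subpopulation.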

evolutionary multitasking optimization; machine learning; inter-task similarity; knowledge transfer; auxiliary task

麦伟杰、刘伟莉、钟竞辉


School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006

School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou 510665

evolutionary multitasking optimization; machine learning; inter-task similarity; knowledge transfer; auxiliary task

National Natural Science Foundation of China (62076098); Guangdong Basic and Applied Basic Research Foundation (2021A1515110072, 2023A1515012291)

2024

Chinese Journal of Computers
China Computer Federation; Institute of Computing Technology, Chinese Academy of Sciences


CSTPCD; Peking University Core Journals
Impact factor: 3.18
ISSN: 0254-4164
Year, Volume (Issue): 2024, 47(1)