Grey wolf optimization algorithm based on multi-strategy combination and its application
The standard grey wolf optimizer (GWO) algorithm struggles to balance global exploration and local exploitation. To address this problem, a multi-strategy grey wolf optimization algorithm (MSGWO) is proposed. First, a Tent chaotic map and a nonlinear convergence factor are introduced into the grey wolf algorithm. Then, three learning strategies, namely extensive learning, elite learning, and coordinated learning, work in coordination during the GWO optimization process. Finally, roulette-wheel selection is used to choose among the strategies, yielding more diverse wolf positions and more globally representative individuals. The algorithm is compared against GWO variants on standard benchmark functions. The results show that MSGWO achieves a better balance between global exploration and local exploitation as well as a faster convergence speed. On this basis, MSGWO is used to optimize the hyperparameters of an echo state network (ESN) for regression prediction. Experiments show a mean absolute percentage error of 0.38% and a goodness of fit of 0.98, validating the optimization performance of the MSGWO algorithm.
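The abstract names three ingredients of MSGWO: Tent-map initialization, a nonlinear convergence factor, and roulette-wheel strategy selection. The abstract does not give their exact formulas, so the following Python sketch uses commonly assumed forms (Tent map with parameter mu = 2, a cosine-shaped decay of the convergence factor from 2 to 0) purely for illustration; the paper's actual definitions may differ.

```python
import math
import random

def tent_map_init(pop_size, dim, mu=2.0):
    """Generate an initial population in [0, 1]^dim via Tent-map iteration.

    mu = 2.0 is the classic Tent map parameter (an assumed choice).
    Chaotic initialization spreads individuals more evenly than plain
    uniform sampling, which helps early-stage exploration.
    """
    pop = []
    for _ in range(pop_size):
        x = random.random()  # random seed point for this individual's orbit
        row = []
        for _ in range(dim):
            # Tent map: x <- mu*x if x < 0.5 else mu*(1 - x)
            x = mu * x if x < 0.5 else mu * (1.0 - x)
            row.append(x)
        pop.append(row)
    return pop

def nonlinear_convergence_factor(t, t_max):
    """One possible nonlinear schedule for GWO's convergence factor a,
    decaying from 2 to 0 along a cosine curve instead of linearly
    (an assumed form; the paper's exact formula is not given here)."""
    return 2.0 * math.cos(math.pi * t / (2.0 * t_max))

def roulette_select(weights):
    """Roulette-wheel selection: pick index i with probability
    proportional to weights[i]; MSGWO uses this to choose among the
    extensive / elite / coordinated learning strategies."""
    total = sum(weights)
    r = random.uniform(0.0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= r:
            return i
    return len(weights) - 1
```

In a full MSGWO loop, `nonlinear_convergence_factor` would replace GWO's linear decay of `a`, and `roulette_select` would be called once per iteration (or per wolf) with strategy weights to decide which learning strategy updates the positions.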
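The application side of the paper tunes ESN hyperparameters with MSGWO. As context for what those hyperparameters are, here is a minimal echo state network for one-step-ahead prediction on a scalar series; the reservoir size, spectral radius, leaking rate, and ridge coefficient used below are placeholder values of the kind an optimizer would search over, not the paper's settings.

```python
import numpy as np

def esn_predict(u, n_res=50, spectral_radius=0.9, leak=0.3, ridge=1e-6, seed=0):
    """Minimal ESN: fixed random reservoir, ridge-regression readout.

    spectral_radius, leak and ridge are typical hyperparameters that
    MSGWO would optimize; values here are illustrative placeholders.
    Returns in-sample one-step-ahead predictions for u[1:].
    """
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, (n_res, 1))     # input weights (fixed)
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))    # reservoir weights (fixed)
    # Rescale so the reservoir's spectral radius matches the hyperparameter
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

    x = np.zeros(n_res)
    states = []
    for t in range(len(u) - 1):
        # Leaky-integrator reservoir update
        x = (1 - leak) * x + leak * np.tanh(W_in[:, 0] * u[t] + W @ x)
        states.append(x.copy())
    X = np.array(states)          # (T-1, n_res) collected reservoir states
    y = u[1:]                     # targets: the next value of the series

    # Ridge-regression readout (the only trained part of an ESN)
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
    return X @ W_out
```

A fitness function for MSGWO would wrap this: decode a wolf's position vector into `(spectral_radius, leak, ridge, ...)`, run the ESN on training data, and return the prediction error (e.g. MAPE) as the value to minimize.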
Keywords: grey wolf optimizer; multiple strategies; roulette wheel; convergence factor; echo state network