
Improved Marine Predators Algorithm for Large-scale Optimization Problems

To address the Marine Predators Algorithm (MPA)'s low solution accuracy and tendency to become trapped in local optima when solving large-scale problems, an Improved Marine Predators Algorithm (IMPA) is proposed. Lloyd's algorithm is used to initialize the population so that individuals are evenly distributed; a Q-learning mechanism adaptively selects among the position-update strategies to balance the algorithm's exploration and exploitation capabilities; and an opposition-based learning mechanism increases population diversity to avoid local optima. Simulation experiments on 13 large-scale (100-, 500-, and 1000-dimensional) benchmark test functions show that the IMPA outperforms the other comparison algorithms in both solution accuracy and convergence speed.
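The Lloyd's-algorithm initialization mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it runs k-means (Lloyd) iterations on a dense uniform sample so that the resulting centroids, used as the initial population, approximate a centroidal Voronoi tessellation of the search space. The function name `lloyd_init` and all hyper-parameters (`n_samples`, `n_iters`) are illustrative assumptions.

```python
import numpy as np

def lloyd_init(pop_size, dim, lb, ub, n_samples=2000, n_iters=20, seed=None):
    """Spread pop_size initial individuals evenly over [lb, ub]^dim by
    running Lloyd's algorithm (k-means iterations) on a dense random
    sample; the centroids approximate a centroidal Voronoi tessellation."""
    rng = np.random.default_rng(seed)
    samples = rng.uniform(lb, ub, size=(n_samples, dim))
    centers = rng.uniform(lb, ub, size=(pop_size, dim))
    for _ in range(n_iters):
        # assign each sample point to its nearest centre (Voronoi partition)
        dist = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # move each centre to the centroid of its Voronoi cell
        for k in range(pop_size):
            members = samples[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return centers

pop = lloyd_init(pop_size=20, dim=2, lb=-100.0, ub=100.0, seed=0)
```

Compared with plain uniform random initialization, the centroids repel each other, which is what gives the even coverage of the solution space the abstract attributes to this step.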
Large-scale optimization problems arise across many domains in real life. However, these problems involve high-dimensional variables with complex interdependencies among them, often rendering traditional optimization algorithms ineffective and inefficient. Given that swarm intelligence optimization algorithms possess strong global search capabilities, inherent potential for parallelism, and distributed characteristics, they are better suited to large-scale optimization problems. Nevertheless, such algorithms typically struggle to balance the exploration and exploitation stages and are prone to local optima, so it is worth investigating improvement strategies for them. The Marine Predators Algorithm (MPA) is a recent swarm intelligence optimization algorithm inspired by the foraging behavior of marine predators in nature. It is simple in principle, easy to implement, and requires few parameter settings. However, like other swarm intelligence algorithms, the MPA has drawbacks that call for corresponding improvements.

To address the MPA's low solution accuracy and susceptibility to local optima when solving large-scale optimization problems, this study first uses Lloyd's algorithm to initialize the prey population. This ensures an even distribution of individuals throughout the solution space, enhancing the population's global search capability. Next, the three position-update strategies of the MPA serve as actions, the number of offspring individuals surpassing their parents defines the state, and the reduction in the optimization objective value is the reward; a Q-learning algorithm then determines the optimal position-update strategy at each iteration, balancing the algorithm's exploration and exploitation and helping it escape local optima. Additionally, after each iteration a reverse (opposition) operation is applied to every optimized individual to obtain the opposite solution of the current population. This expands the search space of the IMPA, enhances population diversity, and effectively prevents the algorithm from falling into local optima.

Finally, this paper conducts a comparative analysis of the IMPA against several existing improved versions of the MPA on 13 high-dimensional test functions. The analysis covers algorithm complexity, convergence speed, and optimization performance, together with a statistical characterization based on Wilcoxon tests. The results indicate that the IMPA outperforms the other comparison algorithms in solution accuracy and convergence speed on high-dimensional problems, demonstrating superior convergence capability and solution accuracy on large-scale optimization problems. At present, the IMPA's performance has been examined only on high-dimensional test functions, not on actual large-scale optimization problems. Future work will mainly involve applying the IMPA to specific large-scale constrained optimization problems and practical engineering issues.
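The Q-learning layer described above — three MPA update rules as actions, a parent-versus-offspring improvement signal as the state, and the drop in the objective value as the reward — can be sketched with a small tabular agent. This is a hypothetical sketch, not the paper's code: the class name `StrategySelector`, the two-valued state, and the hyper-parameters (`alpha`, `gamma`, `epsilon`) are all illustrative assumptions.

```python
import random

class StrategySelector:
    """Tabular Q-learning over the three MPA position-update strategies.
    State 1 means more offspring beat their parents than last generation,
    state 0 means fewer; the reward is the decrease in the best objective."""

    def __init__(self, n_states=2, n_actions=3,
                 alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # epsilon-greedy: explore occasionally, otherwise take the best-known action
        if random.random() < self.epsilon:
            return random.randrange(len(self.q[state]))
        row = self.q[state]
        return row.index(max(row))

    def update(self, state, action, reward, next_state):
        # standard one-step Q-learning update rule
        target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])

random.seed(0)
sel = StrategySelector()
a = sel.choose(state=0)              # pick one of the 3 update strategies
sel.update(0, a, reward=1.5, next_state=1)
```

In each IMPA generation the selected action would decide which of the three MPA update rules moves the population, after which the observed reward feeds back into the Q-table — this is the mechanism that trades off exploration against exploitation over the course of the run.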

Keywords: Marine Predators Algorithm; Lloyd's algorithm; Q-learning mechanism; opposition-based learning

Zhang Wenyu (张文宇), Yuan Yongbin (袁永斌), Gao Xue (高雪), Zhang Bingchen (张炳晨)


School of Economics and Management, Xi'an University of Posts and Telecommunications, Xi'an 710061, Shaanxi, China

China Aerospace Academy of Systems Science and Engineering, Beijing 100854, China


Funding: Shaanxi Provincial Education Department Scientific Research Project (08JK431)

2024

Journal: Operations Research and Management Science (运筹与管理), Operations Research Society of China

Indexed in: CSTPCD; CHSSCD; Peking University Core Journals (北大核心)
Impact factor: 0.688
ISSN:1007-3221
Year, Volume (Issue): 2024, 33(6)