
A Class of Improved PRP Conjugate Gradient Methods (一类改进的PRP型共轭梯度法)

In recent years, with the development of popular fields such as machine learning, fuzzy theory, and neural networks, and with the increasing maturity of computer technology, optimization methods have attracted growing attention, and the conjugate gradient method has drawn more scholars to in-depth study and research. Current research on the conjugate gradient method falls mainly into two categories: the first directly improves the conjugate gradient parameter, and the second mixes different conjugate gradient methods, for example by convexly combining two existing methods to construct new algorithms; different mixing methods differ in their advantages, disadvantages, and convergence characteristics. In this paper, based on the two-term descent PRP method and the three-term descent PRP method, we propose a class of descent PRP methods: when the parameter takes specific values, the method reduces to the two-term or the three-term descent PRP method, respectively. Moreover, the algorithm possesses the sufficient descent property independently of the line search. Under suitable conditions, we prove that the algorithm is globally convergent under an Armijo-type line search. Numerical experiments on large-scale unconstrained optimization problems show that the algorithm is effective.
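For reference, the standard forms behind the two-term and three-term descent PRP directions mentioned in the abstract can be written as follows. These formulas come from the conjugate gradient literature (the three-term form is the Zhang–Zhou–Li type); the paper's exact parametric family is not given in the abstract and may differ.

```latex
% Notation: g_k = \nabla f(x_k), \quad y_{k-1} = g_k - g_{k-1}.
% PRP parameter:
\beta_k^{PRP} = \frac{g_k^{T} y_{k-1}}{\|g_{k-1}\|^{2}}

% Two-term PRP direction:
d_k = -g_k + \beta_k^{PRP} d_{k-1}

% Three-term descent PRP direction, with
% \theta_k = \frac{g_k^{T} d_{k-1}}{\|g_{k-1}\|^{2}}:
d_k = -g_k + \beta_k^{PRP} d_{k-1} - \theta_k y_{k-1}
```

Substituting the definitions of \(\beta_k^{PRP}\) and \(\theta_k\) into the three-term direction gives \(g_k^{T} d_k = -\|g_k\|^{2}\) identically, which is the sufficient descent property that holds independently of the line search.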
A Class of Improved PRP Conjugate Gradient Methods
Optimization methods have been developed over recent decades. They primarily use mathematical approaches to study optimization paths and solutions for various systems, providing a scientific basis for decision-makers. The purpose of optimization methods is to find the best plan for the rational use of human, material, and financial resources in the system under study, to enhance and improve the system's efficiency and benefits, and ultimately to achieve the system's optimal goal. Optimization methods can be divided into unconstrained and constrained methods. Unconstrained optimization methods include the steepest descent method, Newton's method, the conjugate direction method, the conjugate gradient method, and the variable metric method. Constrained optimization methods include the simplex method, the graphical method for linear programming, the penalty function method for equality constraints, and the Rosen gradient projection method, among others. The conjugate gradient method requires only first-order derivative information, yet it overcomes the slow convergence of the steepest descent method and avoids the drawback of Newton's method, which must store and compute the Hessian matrix and its inverse. It is characterized by low memory requirements and simple iterations, making it an effective method for solving large-scale unconstrained optimization problems. Different conjugate gradient parameters correspond to different conjugate gradient methods.

In recent years, with the development of popular fields such as machine learning, fuzzy theory, and neural networks, and with the increasing maturity of computer technology, optimization methods have attracted growing attention, and the conjugate gradient method has naturally drawn more scholars to in-depth study and research. Current research on the conjugate gradient method falls mainly into two categories: the first directly improves the conjugate gradient parameter, and the second mixes different conjugate gradient methods, for example by convexly combining two existing conjugate gradient methods to construct new algorithms. Different mixing methods differ in their advantages, disadvantages, and convergence characteristics. Although existing conjugate gradient methods have performed well in practice, some algorithms still have limitations: they can be sensitive to parameter choices, applicable only to specific classes of functions, or provably convergent only under restrictive conditions. Therefore, issues such as the selection of convex combination parameters, the design of new conjugate gradient methods, and proofs of convergence under weaker line search conditions remain to be studied further.

In practical applications, the PRP method is considered one of the most effective conjugate gradient methods. In this paper, based on the two-term descent PRP method and the three-term descent PRP method, we propose a class of descent PRP methods; when the parameter takes specific values, the method reduces to the two-term descent PRP method or the three-term descent PRP method, respectively. Moreover, the algorithm possesses the sufficient descent property independently of the line search. Under suitable conditions, we show that the algorithm is globally convergent under an Armijo-type line search. Numerical experiments on large-scale unconstrained optimization problems show that the algorithm is effective.
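To make the ingredients of the abstract concrete, the sketch below implements a standard three-term descent PRP method with an Armijo-type backtracking line search. This is an illustrative sketch only: it uses the well-known Zhang–Zhou–Li style three-term direction, which is one endpoint of the family the abstract describes; the paper's actual parametric family and parameter choices are not specified here.

```python
import numpy as np

def three_term_prp(f, grad, x0, tol=1e-6, max_iter=5000, rho=0.5, sigma=1e-4):
    """Minimize f by a three-term descent PRP conjugate gradient method
    with an Armijo-type backtracking line search.

    Illustrative sketch, not the paper's exact algorithm: the direction
    is the standard Zhang-Zhou-Li three-term descent PRP direction.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Armijo-type line search: accept t with
        # f(x + t d) <= f(x) + sigma * t * g^T d
        t, fx, gtd = 1.0, f(x), g @ d
        while f(x + t * d) > fx + sigma * t * gtd and t > 1e-12:
            t *= rho
        x_next = x + t * d
        g_next = grad(x_next)
        y = g_next - g                       # y_{k-1} = g_k - g_{k-1}
        denom = g @ g                        # ||g_{k-1}||^2, nonzero before convergence
        beta = (g_next @ y) / denom          # PRP parameter
        theta = (g_next @ d) / denom         # three-term correction weight
        # Three-term direction; by construction g_next^T d = -||g_next||^2,
        # i.e. sufficient descent holds independently of the line search.
        d = -g_next + beta * d - theta * y
        x, g = x_next, g_next
    return x
```

For example, on a strictly convex quadratic f(x) = x^T A x / 2 - b^T x the iterates converge to the unique minimizer A^{-1} b; the sufficient descent property guarantees that the Armijo backtracking always terminates with a decrease in f.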

Keywords: PRP method; Armijo-type line search; global convergence; unconstrained optimization

YE Jianhao (叶建豪), CHEN Hongsheng (陈鸿升), GUO Ziteng (郭子腾)


School of Computer Science and Technology, Dongguan University of Technology, Dongguan, Guangdong 523000, China


Funding: National Natural Science Foundation of China General Program (11961011, 11971106)

Year: 2024

Journal: Operations Research and Management Science (运筹与管理)
Publisher: Operations Research Society of China


Indexed in: CSTPCD; CHSSCD; Peking University Core (北大核心)
Impact factor: 0.688
ISSN: 1007-3221
Year, Volume (Issue): 2024, 33(7)