A Class of Improved PRP Conjugate Gradient Methods
Optimization methods have been developed over recent decades, primarily using mathematical approaches to study the optimization paths and solutions of various systems and to provide a scientific basis for decision-makers. Their purpose is to find the best plan for the rational use of human, material, and financial resources in the system under study, to enhance and improve the system's efficiency and benefits, and ultimately to achieve the optimal goal of the system. Optimization methods can be divided into unconstrained and constrained methods. Unconstrained optimization methods include the steepest descent method, Newton's method, the conjugate direction method, the conjugate gradient method, and the variable metric method. Constrained optimization methods include the simplex method, the graphical method for solving linear programming, the penalty function method for equality constraints, and the Rosen gradient projection method, among others.

The conjugate gradient method requires only first-order derivative information, yet it overcomes the slow convergence of the steepest descent method and avoids the drawback of Newton's method, namely the need to store and compute the Hessian matrix and its inverse. With its low memory requirements and simple iterations, it is an effective method for solving large-scale unconstrained optimization problems. Different choices of the conjugate gradient parameter correspond to different conjugate gradient methods. In recent years, with the development of active fields such as machine learning, fuzzy theory, and neural networks, and with the increasing maturity of computer technology, optimization methods have received growing attention, and the conjugate gradient method has naturally attracted more scholars to in-depth study. Current research on the conjugate gradient method falls mainly into two categories: the first directly improves the conjugate gradient parameter, and the second combines different conjugate gradient methods, for example by convexly combining two existing methods to construct new algorithms. Different combination schemes differ in their advantages, disadvantages, and convergence properties. Although existing conjugate gradient methods have performed well in practice, some algorithms still have limitations: they may be sensitive to parameter choices, apply only to specific classes of functions, or achieve convergence only under restrictive conditions. Therefore, issues such as the selection of the parameter in convex-combination methods, the refinement of new conjugate gradient methods, and the proof of convergence under weaker line search conditions remain to be studied further.

In practical applications, the PRP method is considered one of the most effective conjugate gradient methods. In this paper, based on the two-term descent PRP method and the three-term descent PRP method, we propose a class of descent PRP methods; when the parameters take specific values, the proposed methods reduce to the two-term and three-term descent PRP methods, respectively. Moreover, the search direction possesses the sufficient descent property independently of the line search. Under suitable conditions, we prove that the algorithm is globally convergent with an Armijo-type line search. Numerical results show that the algorithm is effective.
Keywords: PRP method; Armijo-type line search; global convergence; unconstrained optimization
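For readers unfamiliar with the terms used in the abstract, the classical PRP update, the sufficient descent property, and a standard Armijo-type line search take the textbook forms below. The specific parameterized family of directions proposed in the paper is not stated in the abstract and is therefore not reproduced here; this is background only.

```latex
% Textbook forms of the quantities mentioned in the abstract (not the paper's
% new parameterized family): the PRP update, sufficient descent, and an
% Armijo-type line search.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
The conjugate gradient iteration and the PRP parameter:
\[
  x_{k+1} = x_k + \alpha_k d_k, \qquad
  d_k =
  \begin{cases}
    -g_k, & k = 0,\\
    -g_k + \beta_{k-1} d_{k-1}, & k \ge 1,
  \end{cases}
  \qquad
  \beta_k^{\mathrm{PRP}} = \frac{g_{k+1}^{\top}(g_{k+1} - g_k)}{\lVert g_k \rVert^{2}}.
\]
The sufficient descent property requires a constant $c > 0$ with
\[
  g_k^{\top} d_k \le -c \lVert g_k \rVert^{2} \quad \text{for all } k,
\]
and an Armijo-type line search accepts the largest
$\alpha_k \in \{\rho^{j} : j = 0, 1, 2, \dots\}$ satisfying
\[
  f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^{\top} d_k,
  \qquad \delta, \rho \in (0, 1).
\]
\end{document}
```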
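To illustrate how a method of this type is typically implemented, the sketch below shows a generic two-term PRP method with an Armijo-type backtracking line search. It is a minimal sketch under assumed parameter values (delta, rho, the restart safeguard, and the test function are all illustrative choices), not the parameterized family proposed in the paper.

```python
# Generic PRP conjugate gradient method with Armijo-type backtracking.
# Illustrative sketch only; not the paper's proposed class of methods.
import numpy as np

def prp_cg(f, grad, x0, tol=1e-6, max_iter=1000, delta=1e-4, rho=0.5):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # initial direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:         # gradient small enough: stop
            break
        # Armijo-type backtracking: shrink alpha until sufficient decrease holds
        alpha = 1.0
        while f(x + alpha * d) > f(x) + delta * alpha * np.dot(g, d) and alpha > 1e-12:
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        # PRP parameter: beta_k = g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2
        beta = np.dot(g_new, g_new - g) / np.dot(g, g)
        d = -g_new + beta * d                 # two-term CG direction
        if np.dot(g_new, d) >= 0:             # restart safeguard: keep d a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

# Example usage on the two-dimensional Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(prp_cg(f, grad, np.array([-1.2, 1.0])))
```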