Adaptive gradient descent optimization algorithm for training neural networks
Due to the increasing scale of neural networks, model training has become increasingly challenging. To address this issue, a new adaptive optimization algorithm called Adaboundinject was proposed. Building upon Adam, the improved Adabound algorithm introduces dynamic learning rate bounds to enable a smooth transition from adaptive optimization to stochastic gradient descent (SGD). To avoid overshooting the minimum and to reduce oscillations near it, Adaboundinject injects the first moment into the second moment of Adabound, using short-term parameter updates as weights to control the parameter updates. To validate the algorithm's performance, its convergence was proved theoretically in the convex setting. In the non-convex setting, multiple experiments were conducted with different neural network models, comparing the algorithm against other adaptive algorithms and demonstrating its superior performance. The experimental results indicate that the Adaboundinject algorithm is of significant value for deep learning optimization, effectively improving both the efficiency and accuracy of model training.
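To make the described mechanism concrete, the following is a minimal NumPy sketch of a single Adaboundinject-style update step, written only from the description in this abstract. The bound schedule, the injection term grad + |delta| * m, and all hyperparameter names (final_lr, gamma, etc.) are assumptions for illustration, not the paper's exact definitions.

import numpy as np

def adaboundinject_step(theta, theta_prev, grad, m, v, t,
                        alpha=1e-3, beta1=0.9, beta2=0.999,
                        final_lr=0.1, gamma=1e-3, eps=1e-8):
    # Short-term parameter update, used here as the injection weight
    # (assumed form; the paper's exact weighting may differ).
    delta = theta_prev - theta

    # First moment, as in Adam/Adabound.
    m = beta1 * m + (1 - beta1) * grad

    # Second moment with the first moment injected, weighted by the
    # short-term parameter update (illustrative formulation).
    inject = grad + np.abs(delta) * m
    v = beta2 * v + (1 - beta2) * inject ** 2

    # Bias correction (t starts at 1).
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)

    # Adabound-style dynamic bounds that both converge to final_lr,
    # forcing a gradual transition from adaptive steps toward SGD.
    lower = final_lr * (1 - 1 / (gamma * t + 1))
    upper = final_lr * (1 + 1 / (gamma * t))

    # Element-wise step size clipped into the dynamic bounds.
    step = np.clip(alpha / (np.sqrt(v_hat) + eps), lower, upper)

    theta_new = theta - step * m_hat
    return theta_new, theta, m, v

In a training loop, the previous iterate theta_prev returned by the last call supplies the short-term update delta for the next step.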