DBAdam: An Adaptive Gradient Descent Algorithm with Dynamic Bounds
In the field of neural networks, gradient descent algorithms are the core component for optimizing network weight parameters and have a significant impact on overall performance. Although many adaptive algorithms, such as AdaGrad, RMSProp, and Adam, tend to converge quickly during the early stages of training, their generalization ability is often weaker than that of the SGDM algorithm. To combine the respective advantages of adaptive methods and SGDM, the DBAdam algorithm is proposed. DBAdam constructs a dynamic upper bound function based on the adaptive learning rate and a lower bound function based on gradient and learning-rate information, constraining the learning rate within a controllable range so that the algorithm can better adapt to the gradient changes of different parameters and accelerate convergence. DBAdam has been evaluated on three benchmark datasets using a variety of deep neural network models, and the results demonstrate that it exhibits superior convergence performance.
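To illustrate the bounding idea described above, the following is a minimal sketch, not the authors' reference implementation: the bound schedules `lower` and `upper` (and the hyperparameter `gamma`) are assumptions, since the abstract only states that the per-parameter adaptive step size is constrained between a dynamic lower and upper bound.

```python
import numpy as np

def dbadam_like_step(param, grad, state, lr=1e-3, betas=(0.9, 0.999),
                     eps=1e-8, gamma=1e-3):
    """One Adam-like update whose per-parameter step size is clipped
    between dynamic bounds (hypothetical schedules; the paper's exact
    bound functions are not given in the abstract)."""
    m, v, t = state["m"], state["v"], state["t"] + 1
    beta1, beta2 = betas

    # Standard Adam moment estimates with bias correction.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)

    # Unclipped adaptive step size (per parameter).
    step_size = lr / (np.sqrt(v_hat) + eps)

    # Assumed dynamic bounds that converge toward lr as t grows, so the
    # update stays adaptive early on and approaches an SGD-like step later.
    lower = lr * (1 - 1 / (gamma * t + 1))
    upper = lr * (1 + 1 / (gamma * t))

    clipped = np.clip(step_size, lower, upper)
    param = param - clipped * m_hat

    state.update(m=m, v=v, t=t)
    return param, state

# Usage: minimize f(x) = ||x||^2 from a random start.
x = np.random.randn(5)
state = {"m": np.zeros_like(x), "v": np.zeros_like(x), "t": 0}
for _ in range(200):
    grad = 2 * x
    x, state = dbadam_like_step(x, grad, state, lr=0.1)
print(x)  # should end up near the zero vector
```

The key design point is that the bounds tighten around the base learning rate over time, so the method behaves adaptively at the start of training and increasingly like SGDM as training proceeds.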