
DBAdam: An Adaptive Gradient Descent Algorithm with Dynamic Bounds

In neural networks, the gradient descent algorithm is the core component for optimizing the network's weight and bias parameters, and it largely determines overall performance. Adaptive algorithms such as AdaGrad, RMSProp, and Adam converge quickly in the early phase of training, but their generalization ability is usually weaker than that of the SGDM algorithm. To combine the respective strengths of adaptive methods and SGDM, the DBAdam algorithm is proposed. Using gradient and learning-rate information, DBAdam constructs dynamic upper- and lower-bound functions on the adaptive learning rate, constraining it to a controllable range; this lets the algorithm adapt better to the gradient variation of different parameters and thereby accelerates convergence. DBAdam was evaluated with a variety of deep neural network models on three benchmark datasets, and the results show that it achieves good convergence performance.
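The record does not give DBAdam's exact bound functions, only that dynamic upper and lower bounds constrain the adaptive learning rate. As a rough illustration of that mechanism, the sketch below clips Adam's per-coordinate step size between time-varying bounds in the style of AdaBound; the function name dbadam_style_update and the specific bound formulas (final_lr, gamma) are illustrative assumptions, not the paper's definitions.

import numpy as np

def dbadam_style_update(param, grad, state, lr=1e-3, betas=(0.9, 0.999),
                        eps=1e-8, final_lr=0.1, gamma=1e-3):
    # One update with dynamically bounded per-coordinate step sizes.
    # Illustrative only: the bound functions follow the AdaBound pattern
    # (bounds tighten toward final_lr as t grows), not DBAdam's exact form.
    beta1, beta2 = betas
    state['t'] += 1
    t = state['t']

    # Standard Adam moment estimates with bias correction.
    state['m'] = beta1 * state['m'] + (1 - beta1) * grad
    state['v'] = beta2 * state['v'] + (1 - beta2) * grad ** 2
    m_hat = state['m'] / (1 - beta1 ** t)
    v_hat = state['v'] / (1 - beta2 ** t)

    # Adam's per-coordinate adaptive step size.
    step = lr / (np.sqrt(v_hat) + eps)

    # Dynamic bounds: very loose early (Adam-like behaviour), converging
    # toward final_lr as t grows (SGDM-like behaviour late in training).
    lower = final_lr * (1 - 1 / (gamma * t + 1))
    upper = final_lr * (1 + 1 / (gamma * t))
    step = np.clip(step, lower, upper)

    return param - step * m_hat

# Usage: keep one state dict per parameter array.
w = np.zeros(3)
state = {'t': 0, 'm': np.zeros_like(w), 'v': np.zeros_like(w)}
w = dbadam_style_update(w, np.array([0.1, -0.2, 0.05]), state)

Because the bounds start wide and narrow over time, the update behaves like Adam early in training and approaches an SGD-with-momentum step later, which is the Adam-to-SGDM transition the abstract describes.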

Keywords: neural network; adaptive algorithm; SGDM algorithm; convergence

Zhang Shuai, Liu Yaoqi, Jiang Zhixia


School of Mathematics and Statistics, Changchun University of Science and Technology, Changchun 130022, China

College of Automotive Engineering, Jilin University, Changchun 130012, China


Natural Science Foundation of Jilin Province (YDZJ202201ZYTS519)

2024

Journal of Changchun University of Science and Technology (Natural Science Edition)
Changchun University of Science and Technology

CSTPCD
Impact factor: 0.432
ISSN: 1672-9870
Year, Volume (Issue): 2024, 47(5)