
Convergence of Hyperbolic Neural Networks Under Riemannian Stochastic Gradient Descent

We prove, under mild conditions, the convergence of a Riemannian gradient descent method for a hyperbolic neural network regression model, both in batch gradient descent and stochastic gradient descent. We also discuss a Riemannian version of the Adam algorithm. We show numerical simulations of these algorithms on various benchmarks.

Keywords: Hyperbolic neural network, Riemannian gradient descent, Riemannian Adam (RAdam), Training convergence
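To make the update rule referenced in the abstract concrete, the sketch below (not taken from the paper; the loss, step size, and points are illustrative placeholders) shows one Riemannian SGD step on the Poincaré ball model of hyperbolic space with curvature -1: the Euclidean gradient is rescaled by the inverse of the conformal metric, and the iterate is moved along a geodesic via the exponential map.

```python
# Minimal sketch of Riemannian SGD on the Poincare ball (curvature -1).
# Illustrative only; not the authors' implementation.
import numpy as np

def mobius_add(x, y):
    """Mobius addition on the Poincare ball."""
    xy, x2, y2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    num = (1.0 + 2.0 * xy + y2) * x + (1.0 - x2) * y
    return num / (1.0 + 2.0 * xy + x2 * y2)

def exp_map(x, v):
    """Exponential map at x applied to a tangent vector v."""
    v_norm = np.linalg.norm(v)
    if v_norm < 1e-15:
        return x
    lam = 2.0 / (1.0 - np.dot(x, x))            # conformal factor lambda_x
    return mobius_add(x, np.tanh(lam * v_norm / 2.0) * v / v_norm)

def rsgd_step(x, euclid_grad, lr):
    """One RSGD update: rescale the Euclidean gradient by the inverse
    metric, then move along the geodesic via the exponential map."""
    riem_grad = ((1.0 - np.dot(x, x)) ** 2 / 4.0) * euclid_grad
    return exp_map(x, -lr * riem_grad)

def dist(x, y):
    """Hyperbolic distance on the Poincare ball."""
    d = np.linalg.norm(mobius_add(-x, y))
    return 2.0 * np.arctanh(np.clip(d, 0.0, 1.0 - 1e-15))

# Toy usage: pull a point toward a target in hyperbolic distance,
# using a finite-difference Euclidean gradient of d(x, target)^2.
x, target, eps = np.array([0.3, 0.1]), np.array([-0.4, 0.2]), 1e-6
for _ in range(200):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (dist(x + e, target) ** 2 - dist(x - e, target) ** 2) / (2 * eps)
    x = rsgd_step(x, g, lr=0.1)
print(x, dist(x, target))
```

The same structure underlies a Riemannian Adam variant: the moment estimates are formed from the Riemannian gradient and the step is still taken with the exponential map (or a retraction) rather than a straight-line update.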

Wes Whiting, Bao Wang, Jack Xin


Department of Mathematics, University of California, Irvine, CA, USA

Department of Mathematics, Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, USA

Funding: NSF grants at UC Irvine (DMS-1854434, DMS-1952644, DMS-2151235); NSF grants (DMS-1924935, DMS-1952339, DMS-2110145, DMS-2152762, DMS-2208361); DOE grants (DE-SC0021142, DE-SC0002722)

Journal: 应用数学与计算数学学报 (Communications on Applied Mathematics and Computation)
Publisher: 上海大学 (Shanghai University)
Impact factor: 0.165
ISSN: 1006-6330
Year, Volume (Issue): 2024, 6(2)