LSPIA, (stochastic) gradient descent, and parameter correction
Original article links: NSTL, Elsevier
We show that the LSPIA method for curve and surface approximation, introduced by Deng and Lin (2014), is equivalent to a gradient descent method. We also note that Deng and Lin's results concerning feasible values of the stepsize follow directly from classical convergence results for gradient descent. We propose a modification based on stochastic gradient descent, which lends itself to an implementation using neural-network technology. In addition, we show how to incorporate the optimization of the parameterization of the given data into this framework via parameter correction (PC). This leads to the new LSPIA-PC method and its neural-network based implementation. Numerical experiments indicate that it gives better results than LSPIA at comparable computational cost. © 2021 Elsevier B.V. All rights reserved.
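Since the abstract summarizes the techniques compactly, a minimal numerical sketch may help make the stated equivalence concrete. The code below is not the authors' implementation: the basis (Bernstein polynomials standing in for B-splines), the helper names (`bernstein_basis`, `lspia_step`, `pc_step`), the fixed step size, and the Hoschek-style foot-point formula used for the parameter-correction step are all illustrative assumptions. It verifies that one LSPIA update coincides with one gradient-descent step on F(P) = ½‖Q − AP‖², with the feasible step sizes 0 < μ < 2/λ_max(AᵀA) coming from classical gradient-descent theory, and then alternates LSPIA steps with a parameter correction in the spirit of LSPIA-PC.

```python
import numpy as np
from math import comb

def bernstein_basis(t, n):
    """Collocation matrix A[j, i] = B_i(t_j) of the degree-(n-1)
    Bernstein basis, standing in for the B-spline basis of LSPIA."""
    d = n - 1
    return np.array([[comb(d, i) * s**i * (1.0 - s)**(d - i)
                      for i in range(n)] for s in t])

def lspia_step(P, A, Q, mu):
    """One LSPIA iteration: shift every control point along the
    basis-weighted sum of the current residuals."""
    return P + mu * (A.T @ (Q - A @ P))

def pc_step(P, t, Q):
    """Hypothetical parameter-correction step: a Hoschek-style first-order
    foot-point projection of each data point onto the current curve (one
    standard realization of PC; the paper's exact variant may differ)."""
    n = P.shape[0]
    c = bernstein_basis(t, n) @ P            # curve points c(t_j)
    dP = (n - 1) * np.diff(P, axis=0)        # derivative control points
    dc = bernstein_basis(t, n - 1) @ dP      # tangents c'(t_j)
    num = np.einsum('jk,jk->j', Q - c, dc)
    den = np.maximum(np.einsum('jk,jk->j', dc, dc), 1e-12)
    return np.clip(t + num / den, 0.0, 1.0)

rng = np.random.default_rng(0)
m, n = 50, 6
t_true = np.linspace(0.0, 1.0, m)
Q = np.column_stack([t_true, np.sin(2 * np.pi * t_true)]) \
    + 0.01 * rng.normal(size=(m, 2))
t0 = t_true**1.5                  # deliberately distorted initial parameters
A0 = bernstein_basis(t0, n)

# Feasible step sizes follow from classical gradient-descent theory:
# convergence for 0 < mu < 2 / lambda_max(A^T A).
mu = 1.0 / np.linalg.eigvalsh(A0.T @ A0).max()

# 1) Equivalence check: LSPIA and explicit gradient descent on
#    F(P) = 1/2 ||Q - A P||^2 produce the same iterates.
P_lspia = np.zeros((n, 2))
P_gd = np.zeros((n, 2))
for _ in range(200):
    P_lspia = lspia_step(P_lspia, A0, Q, mu)
    P_gd = P_gd - mu * (-(A0.T @ (Q - A0 @ P_gd)))   # minus the gradient
print(np.allclose(P_lspia, P_gd))                    # True

# 2) LSPIA-PC (sketch): alternate LSPIA steps with parameter correction,
#    rebuilding the collocation matrix after each correction. mu is kept
#    fixed for simplicity; strictly it should track the current A.
t_pc, P_pc = t0.copy(), np.zeros((n, 2))
for _ in range(200):
    A_pc = bernstein_basis(t_pc, n)
    P_pc = lspia_step(P_pc, A_pc, Q, mu)
    t_pc = pc_step(P_pc, t_pc, Q)
print(np.linalg.norm(Q - A0 @ P_lspia),                      # error without PC
      np.linalg.norm(Q - bernstein_basis(t_pc, n) @ P_pc))   # error with PC
```

The stochastic variant proposed in the paper would, presumably, replace the full residual sum in `lspia_step` by a sum over a random mini-batch of data points, which is what makes an implementation on top of standard neural-network machinery natural.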