Universality of gradient descent neural network training
Full text: NSTL | Elsevier
It has been observed that the design choices of neural networks are often crucial for their successful optimization. In this article, we therefore discuss the question of whether it is always possible to redesign a neural network so that it trains well with gradient descent. This yields the following universality result: if, for a given network, there is any algorithm that can find good network weights for a classification task, then there exists an extension of this network that reproduces the same forward model by mere gradient descent training. The construction is not intended for practical computation, but it provides some orientation on the possibilities of pre-trained networks in meta-learning and related approaches. © 2022 Elsevier Ltd. All rights reserved.
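The following is a minimal toy sketch of the flavor of the claim, not the paper's actual construction: weights for a classification task are found by some arbitrary algorithm (here, a regularized least-squares solve standing in for "any algorithm"), and an extended network with additional trainable parameters is then fitted by plain gradient descent to reproduce the same forward model. All names, sizes, and hyperparameters below are illustrative assumptions.

```python
# Toy illustration (assumed setup, not the authors' construction):
# given "good" weights found by an arbitrary algorithm, mere gradient
# descent on an extended network reproduces the same forward model.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Step 1: "any algorithm" finds good weights for a linear model --
# here a regularized least-squares solve stands in for that algorithm.
w_star = np.linalg.solve(X.T @ X + 1e-3 * np.eye(2), X.T @ (2 * y - 1))
f_star = X @ w_star  # the forward model we want to reproduce

# Step 2: an *extended* network (one extra hidden layer) is trained by
# plain gradient descent to match the target forward model.
H = 16
W1 = rng.normal(size=(2, H)) * 0.5
W2 = rng.normal(size=(H, 1)) * 0.5
lr = 0.05
for step in range(2000):
    h = np.tanh(X @ W1)           # hidden activations
    f = (h @ W2).ravel()          # extended network's output
    err = f - f_star              # mismatch with the target forward model
    # Gradients of the squared loss 0.5 * mean(err**2).
    gW2 = h.T @ err[:, None] / len(X)
    gh = err[:, None] @ W2.T * (1 - h**2)
    gW1 = X.T @ gh / len(X)
    W2 -= lr * gW2
    W1 -= lr * gW1

print("mean squared deviation from target model:", np.mean(err**2))
```

In this sketch the extension simply distills a known-good forward model; the paper's result is stronger, guaranteeing that such an extension exists whenever any weight-finding algorithm succeeds.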
Keywords: Deep neural networks; Universality; Global minima; Turing machines; Meta-learning; Biologically plausible learning