
Convergence Analysis of Non-Convex Incremental Feedforward Neural Networks

This paper proposes a convergence proof for a given class of non-convex incremental feedforward neural networks and derives the corresponding global approximation corollary. The conditions of the main theorem further characterize the relationship between the objective function and the activation function under the specific non-convex incremental iteration. On this basis, a convergence proof is given for the optimal choice of incremental random neurons; this result fills a gap in the theoretical study of the convergence rate of random neurons and indirectly confirms the global approximation theory of random neurons. A search algorithm for random neurons is then presented, which overcomes the drawbacks of the common recursive-derivative error algorithm and prevents the network from being trapped in suboptimal local minima. Finally, several benchmark regression problems are used for experimental verification, and the results support the correctness and effectiveness of the proposed theoretical method. These findings not only extend the theoretical study of neural networks but also provide practical guidance for the optimization and improvement of neural networks in applications.
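The paper's actual algorithm is not reproduced on this page. Purely as an illustration of the scheme the abstract describes (growing a single-hidden-layer network one random neuron at a time, with each neuron chosen by a candidate search rather than by recursive-derivative updates), the following is a minimal NumPy sketch in the spirit of I-ELM-style incremental networks; the function name, the tanh activation, and the candidate-search details are assumptions, not the authors' method.

import numpy as np

def grow_random_network(X, y, max_neurons=50, n_candidates=20, seed=0):
    # Hypothetical sketch of incremental random-neuron construction.
    # X: (n_samples, n_features) inputs; y: (n_samples,) regression targets.
    rng = np.random.default_rng(seed)
    residual = y.astype(float)            # e_0 = y (no neurons yet)
    W, b, beta = [], [], []
    for _ in range(max_neurons):
        best = None
        for _ in range(n_candidates):
            # Draw one random candidate hidden neuron (weights and bias).
            w_c = rng.uniform(-1.0, 1.0, X.shape[1])
            b_c = rng.uniform(-1.0, 1.0)
            h = np.tanh(X @ w_c + b_c)    # candidate activation vector
            # Closed-form output weight fitting the current residual:
            # beta_k = <e_{k-1}, h> / <h, h> (one-neuron least squares).
            beta_c = (residual @ h) / (h @ h)
            err = np.linalg.norm(residual - beta_c * h)
            if best is None or err < best[0]:
                best = (err, w_c, b_c, beta_c, h)
        _, w_c, b_c, beta_c, h = best
        W.append(w_c); b.append(b_c); beta.append(beta_c)
        residual = residual - beta_c * h  # e_k = e_{k-1} - beta_k * h_k
    return np.array(W), np.array(b), np.array(beta)

# Toy usage: approximate y = sin(x) on [-3, 3].
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = grow_random_network(X, y)
y_hat = np.tanh(X @ W.T + b) @ beta       # network prediction

In this sketch the optimal per-neuron output weight guarantees that the residual norm never increases from one step to the next, which is the kind of monotonicity that convergence analyses of incremental networks formalize.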

feedforward neural networks; convergence rate; universal approximation; non-convex optimization; random distribution

ZHANG Lixiao (张力潇), CHEN Lei (陈磊)


Kitchen and Water Heater Appliances Division, Midea Group, Shanghai 201702, China


Journal of Southwest China Normal University (Natural Science Edition)

Southwest University

Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 0.805
ISSN: 1000-5471
Year, volume (issue): 2024, (4)