Convergence Analysis of Non-Convex Incremental Feedforward Neural Networks
This paper presents a convergence proof for a class of non-convex incremental feedforward neural networks and derives the corresponding global approximation results. The conditions of the main theorem further characterize the relationship between the objective function and the activation function under a specific non-convex incremental iteration. On this basis, a convergence proof is given for the optimal choice of incrementally added random neurons, which fills a gap in the theoretical study of the convergence rate of random neurons and indirectly verifies the global approximation theory of random neurons. To validate these conclusions, a search algorithm over random neurons is employed; it avoids the drawbacks of conventional gradient-based (derivative) error algorithms and prevents the network from being trapped in suboptimal local minima. Finally, several benchmark regression problems are used for experimental verification, and the results support the correctness and effectiveness of the proposed theoretical method. The results of this paper not only extend the theoretical study of neural networks, but also provide practical guidance for the optimization and improvement of neural networks in applications.
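For intuition, the following is a minimal Python sketch of the kind of incremental random-neuron search the abstract describes: at each step several random hidden neurons are drawn, the one that most reduces the residual error is kept, and its output weight is set by least squares rather than gradient descent. The function name, the candidate count, the tanh activation, and all hyperparameters are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def incremental_random_search(X, y, max_neurons=50, n_candidates=20,
                              tol=1e-6, seed=None):
    """Grow a single-hidden-layer network one random neuron at a time,
    keeping the best of n_candidates random draws per step (a sketch of
    the search-based alternative to derivative-based training)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    residual = y.astype(float).copy()        # current approximation error e_{k-1}
    weights, biases, betas = [], [], []

    for _ in range(max_neurons):
        best = None
        for _ in range(n_candidates):
            # Draw a random hidden neuron (w, b) and evaluate its activation.
            w = rng.uniform(-1.0, 1.0, size=d)
            b = rng.uniform(-1.0, 1.0)
            h = np.tanh(X @ w + b)
            # Output weight minimizing ||residual - beta * h||^2 in closed form.
            beta = (h @ residual) / (h @ h)
            new_err = np.linalg.norm(residual - beta * h)
            if best is None or new_err < best[0]:
                best = (new_err, w, b, beta, h)
        err, w, b, beta, h = best
        weights.append(w); biases.append(b); betas.append(beta)
        residual = residual - beta * h        # e_k = e_{k-1} - beta_k * h_k
        if err < tol:
            break
    return np.array(weights), np.array(biases), np.array(betas)

# Usage: fit a toy 1-D regression target.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])
W, B, beta = incremental_random_search(X, y, seed=1)
pred = np.tanh(X @ W.T + B) @ beta
print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

Because each output weight is chosen in closed form and hidden parameters are never adjusted by gradients, the residual error is non-increasing by construction, which is the mechanism behind the convergence behavior the abstract refers to.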