Journal information
Neural Networks
Pergamon Press
ISSN: 0893-6080

Indexed in: SCI, AHCI, EI, ISTP
Officially published
Indexed years

    Observer-based adaptive neural tracking control for a class of nonlinear systems with prescribed performance and input dead-zone constraints

    Zong G., Wang Y., Karimi H.R., Shi K., ...
    10 pages
    Abstract: © 2021 Elsevier Ltd. This paper investigates the problem of output-feedback neural network (NN) learning tracking control for nonlinear strict-feedback systems subject to prescribed performance and input dead-zone constraints. First, an NN is utilized to approximate the unknown nonlinear functions, and a state observer is developed to estimate the unmeasurable states. Second, based on the command filter method, an output-feedback NN learning backstepping control algorithm is established. Third, a prescribed performance function is employed to ensure the transient performance of the closed-loop system and to force the tracking error to fall within the prescribed performance boundary. It is rigorously proved that all signals in the closed-loop system are semi-globally uniformly ultimately bounded and that the tracking error converges to an arbitrarily small neighborhood of the origin. Finally, a numerical example and an application to an electromechanical system are given to show the effectiveness of the proposed control algorithm.
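    The prescribed performance technique mentioned in this abstract is commonly realized with an exponentially decaying error envelope. The sketch below shows that standard form; the paper's exact performance function and parameter values are not given in the abstract, so treat this as an illustrative assumption.

```latex
% Standard prescribed-performance bound (illustrative form): the
% tracking error e(t) must stay inside a shrinking envelope rho(t).
\[
  -\underline{\delta}\,\rho(t) \;<\; e(t) \;<\; \overline{\delta}\,\rho(t),
  \qquad
  \rho(t) = (\rho_0 - \rho_\infty)\,e^{-\kappa t} + \rho_\infty ,
\]
% with rho_0 > rho_inf > 0 the initial and steady-state error bounds,
% kappa > 0 the minimum convergence rate, and 0 < delta's <= 1 shaping
% the asymmetry of the band.
```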

    Fully corrective gradient boosting with squared hinge: Fast learning rates and early stopping

    Zeng J., Zhang M., Lin S.-B.
    16 pages
    Abstract: © 2021 Elsevier Ltd. In this paper, we propose an efficient boosting method with theoretical guarantees for binary classification. There are three key ingredients: a fully corrective greedy (FCG) update, a differentiable squared hinge (also called truncated quadratic) loss function, and an efficient alternating direction method of multipliers (ADMM) solver. Compared with traditional boosting methods, the FCG update accelerates the numerical convergence rate, while the squared hinge loss inherits the robustness of the hinge loss for classification and retains the theoretical benefits of the square loss in regression. The ADMM solver, with guaranteed fast convergence, then provides an efficient implementation of the proposed boosting method. We conduct both theoretical analysis and numerical verification to demonstrate the advantages of the proposed method. Theoretically, a fast learning rate of order O((m/log m)^{-1/2}) is proved under certain standard assumptions, where m is the sample size. Numerically, a series of toy simulations and real-data experiments are carried out to verify the developed theory.
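    As a rough illustration of the three ingredients above, the sketch below combines a greedy weak-learner selection step with a fully corrective refit under the squared hinge loss. It is a minimal sketch, not the paper's implementation: decision stumps are an assumed weak learner, and a generic L-BFGS solver stands in for the paper's ADMM solver.

```python
# Minimal sketch of fully corrective gradient boosting with the squared
# hinge loss (illustrative assumptions: stump weak learners, L-BFGS
# instead of ADMM, brute-force threshold search).
import numpy as np
from scipy.optimize import minimize

def squared_hinge(y, f):
    """Squared hinge loss: mean of max(0, 1 - y*f)^2 with y in {-1, +1}."""
    return np.mean(np.maximum(0.0, 1.0 - y * f) ** 2)

def fit_fcg_boost(X, y, n_rounds=10):
    stumps, coefs = [], np.zeros(0)
    f = np.zeros(len(y))
    for _ in range(n_rounds):
        # 1. Greedy step: pick the stump most correlated with the
        #    gradient of the squared hinge loss at the current fit.
        grad = -2.0 * y * np.maximum(0.0, 1.0 - y * f)
        best = None, -np.inf
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                h = np.where(X[:, j] <= thr, 1.0, -1.0)
                score = abs(h @ grad)
                if score > best[1]:
                    best = (j, thr), score
        stumps.append(best[0])
        # 2. Fully corrective step: refit ALL coefficients jointly,
        #    not just the newly added one.
        H = np.column_stack([np.where(X[:, j] <= t, 1.0, -1.0)
                             for j, t in stumps])
        res = minimize(lambda c: squared_hinge(y, H @ c),
                       np.append(coefs, 0.0), method="L-BFGS-B")
        coefs = res.x
        f = H @ coefs
    return stumps, coefs
```

    The fully corrective refit in step 2 is what distinguishes FCG from classic gradient boosting, which freezes previously fitted coefficients and only tunes the newest weak learner.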

    Command-filter-based adaptive neural tracking control for a class of nonlinear MIMO state-constrained systems with input delay and saturation

    Zhou Y., Wang X., Xu R.
    11 pages
    Abstract: © 2021 Elsevier Ltd. This paper investigates the problem of adaptive tracking control for a class of nonlinear multi-input multi-output (MIMO) state-constrained systems with input delay and saturation. In the control design, a neural network is employed to approximate the unknown nonlinear uncertainties, and an appropriate barrier Lyapunov function is introduced to prevent violation of the state constraints. In addition, for the issue of input saturation with time delay, a smooth non-affine approximation function and a novel auxiliary system are utilized, respectively. Moreover, adaptive neural tracking control is developed by combining the command-filter backstepping approach, which effectively avoids the explosion of differentiation terms and reduces the computational burden. The introduced filtering-error compensation system significantly improves the tracking performance. Finally, simulation results are presented to verify the feasibility of the proposed strategy.
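    The command filter referred to above is typically a second-order low-pass filter driven by the virtual control signal: it returns a smooth filtered copy of the signal together with its derivative, so no analytic differentiation is needed in the backstepping recursion. A minimal sketch follows; the filter form and parameter values are standard choices, not taken from the paper.

```python
# Minimal sketch of a second-order command filter (generic form;
# omega_n, zeta and dt are illustrative values, not the paper's).
import numpy as np

def command_filter_step(alpha, z, omega_n=50.0, zeta=0.9, dt=1e-3):
    """One Euler step of
           z1_dot = z2
           z2_dot = -2*zeta*omega_n*z2 - omega_n**2 * (z1 - alpha).
    z1 tracks the virtual control alpha; z2 approximates alpha_dot."""
    z1, z2 = z
    z1_dot = z2
    z2_dot = -2.0 * zeta * omega_n * z2 - omega_n**2 * (z1 - alpha)
    return np.array([z1 + dt * z1_dot, z2 + dt * z2_dot])
```

    With omega_n chosen large relative to the system dynamics, z1 follows alpha closely while z2 supplies the derivative that would otherwise have to be computed symbolically at every backstepping stage.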

    Symmetric positive definite manifold learning and its application in fault diagnosis

    Liu Y., Hu Z., Zhang Y.
    12 pages
    Abstract: © 2021. Locally linear embedding (LLE) is an effective tool for extracting significant features from a dataset. However, most existing algorithms assume that the original dataset resides in a Euclidean space, whereas in practice the original data space is usually non-Euclidean. In addition, the original LLE does not use the discriminant information of the dataset, which degrades its performance in feature extraction. To address these problems in the conventional LLE, we first employ the original dataset to construct a symmetric positive definite (SPD) manifold, and then estimate the tangent space of this manifold. Furthermore, local and global discriminant information is integrated into LLE, and the improved LLE operates in the tangent space to extract the important features. We use the Iris dataset to analyze the feature-extraction capability of the proposed method. Finally, several experiments are performed on five machinery datasets, and the results indicate that the proposed method extracts excellent low-dimensional representations of the original data. Compared with state-of-the-art methods, the proposed algorithm shows a strong capability for fault diagnosis.
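    The tangent-space step described above is often implemented with the matrix logarithm of each SPD matrix (the log-Euclidean map). The sketch below shows that mapping and a metric-preserving vectorization; whether the paper uses exactly this map is an assumption, since the abstract does not specify it.

```python
# Minimal sketch of mapping SPD matrices to a tangent space via the
# matrix logarithm (log-Euclidean approach; an assumed realization of
# the tangent-space estimation described in the abstract).
import numpy as np

def spd_log_map(C):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(C)              # SPD => real, positive eigenvalues
    return V @ np.diag(np.log(w)) @ V.T

def spd_to_tangent_vector(C):
    """Vectorize log(C), weighting off-diagonal entries by sqrt(2)
    so the Euclidean norm of the vector matches the log-Euclidean metric."""
    L = spd_log_map(C)
    iu = np.triu_indices_from(L, k=1)
    return np.concatenate([np.diag(L), np.sqrt(2.0) * L[iu]])
```

    Once every SPD sample is flattened this way, standard LLE (with or without the discriminant terms the paper adds) can operate on the resulting Euclidean vectors.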

    Efficient joint model learning, segmentation and model updating for visual tracking

    Han W., Lekamalage C.K.L., Huang G.-B.
    11 pages
    Abstract: © 2021 Elsevier Ltd. The tracking-by-segmentation framework is widely used in visual tracking to handle severe appearance changes such as deformation and occlusion. Tracking-by-segmentation methods first segment the target object from the background, then use the segmentation result to estimate the target state. In existing methods, target segmentation is formulated as a superpixel labeling problem constrained by a target likelihood constraint, a spatial smoothness constraint and a temporal consistency constraint. The target likelihood is calculated by a discriminative part model trained independently of the superpixel labeling framework and updated online using historical tracking results as pseudo-labels. Due to the lack of spatial and temporal constraints and to inaccurate pseudo-labels, the discriminative model is unreliable and may lead to tracking failure. This paper addresses these problems by integrating the objective function of model training into the target segmentation optimization framework. Thus, during optimization, the discriminative model is constrained by the spatial and temporal constraints and provides more accurate target likelihoods for part labeling, and the labeling results in turn produce more reliable pseudo-labels for model learning. Moreover, we propose a supervision switch mechanism that detects erroneous pseudo-labels caused by a severe change in data distribution and switches the classifier to a semi-supervised setting in such cases. Evaluation results on the OTB2013, OTB2015 and TC-128 benchmarks demonstrate the effectiveness of the proposed tracking algorithm.
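    The three constraints named in the abstract fit a generic superpixel-labeling energy of the following form; this is an illustrative formulation, not the paper's exact objective or weighting.

```latex
% Generic superpixel-labeling energy (illustrative). L = {l_i} assigns
% each superpixel i a foreground/background label in frame t.
\[
E(L) \;=\; \sum_{i} \phi_{\mathrm{lik}}(\ell_i)
\;+\; \lambda_s \sum_{(i,j)\in\mathcal{N}} \phi_{\mathrm{smooth}}(\ell_i,\ell_j)
\;+\; \lambda_t \sum_{i} \phi_{\mathrm{temp}}(\ell_i, L^{t-1}),
\]
% where the first term is the target likelihood from the discriminative
% part model, the second enforces spatial smoothness over neighboring
% superpixels, and the third enforces temporal consistency with the
% previous frame's labeling. The paper's contribution is to optimize
% the model-training objective jointly with an energy of this kind.
```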

    HRel: Filter pruning based on High Relevance between activation maps and class labels

    Sarvani C.H., Ghorai M., Dubey S.R., Basha S.H.S., ...
    12 pages
    Abstract: © 2022 Elsevier Ltd. This paper proposes an Information Bottleneck theory based filter pruning method that uses a statistical measure called mutual information (MI). The MI between filters and class labels, also called relevance, is computed using the filters' activation maps and the annotations. Filters with high relevance (HRel) are considered more important; consequently, the least important filters, which have lower mutual information with the class labels, are pruned. Unlike existing MI-based pruning methods, the proposed method determines the significance of the filters purely from the relationship between their activation maps and the class labels. Architectures such as LeNet-5, VGG-16, ResNet-56, ResNet-110 and ResNet-50 are utilized to demonstrate the efficacy of the proposed pruning method on the MNIST, CIFAR-10 and ImageNet datasets, where it achieves state-of-the-art pruning results. In the experiments, we prune 97.98%, 84.85%, 76.89%, 76.95%, and 63.99% of the floating point operations (FLOPs) from LeNet-5, VGG-16, ResNet-56, ResNet-110, and ResNet-50, respectively, outperforming recent state-of-the-art filter pruning methods. Even after drastically pruning the filters of the convolutional layers of LeNet-5 (from 20 and 50 to 2 and 3, respectively), only a small accuracy drop of 0.52% is observed. Notably, for VGG-16, 94.98% of the parameters are removed with only a 0.36% drop in top-1 accuracy, and ResNet-50 shows a 1.17% drop in top-5 accuracy after pruning 66.42% of the FLOPs. In addition to pruning, the Information Plane dynamics of Information Bottleneck theory are analyzed for various convolutional neural network architectures under the effect of pruning. The code is available at https://github.com/sarvanichinthapalli/HRel.
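    A minimal sketch of the relevance computation described above: each filter's activation map is reduced to a scalar summary, discretized, and scored by mutual information with the class labels, after which the lowest-scoring filters are pruned. The summary statistic, binning scheme and MI estimator here are assumptions for illustration, not the paper's exact procedure.

```python
# Minimal sketch of relevance-based filter ranking in the spirit of
# HRel (illustrative; the paper's exact estimator is not reproduced).
import numpy as np
from sklearn.metrics import mutual_info_score

def filter_relevance(acts, labels, n_bins=10):
    """acts: (N, C, H, W) activation maps for N samples and C filters.
    Returns one MI ("relevance") score per filter."""
    summary = acts.mean(axis=(2, 3))          # (N, C) per-filter mean activation
    scores = np.empty(summary.shape[1])
    for c in range(summary.shape[1]):
        # Discretize the continuous summaries so that a plug-in MI
        # estimate against the discrete class labels is possible.
        edges = np.histogram_bin_edges(summary[:, c], n_bins)
        binned = np.digitize(summary[:, c], edges)
        scores[c] = mutual_info_score(labels, binned)
    return scores

# Prune the k filters with the least relevance to the labels:
# prune_idx = np.argsort(filter_relevance(acts, labels))[:k]
```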

    Corrigendum to “A model of operant learning based on chaotically varying synaptic strength” [Neural Netw. 108 (2018) 114–127] (DOI: 10.1016/j.neunet.2018.08.006)

    Wei T., Webb B.
    15 pages
    Abstract: © 2018 The Author(s). The authors regret that there are typos in Equations 9, 16, 18, 19, 20 and 21. The experiments and results are not affected. In Equation 9, the plus signs should be minuses. Equation 9 reads as: [Formula presented] It should be: [Formula presented] In Equations 16, 20 and 21, the fraction signs are inline while the numerators and denominators are not bracketed. Equation 16 reads as: [Formula presented] It should be: [Formula presented] Equation 20 reads as: [Formula presented] It should be: [Formula presented] Equation 21 reads as: [Formula presented] It should be: [Formula presented] In Equation 18, the second and third [Formula presented]. Equation 18 reads as: [Formula presented] The correct Equation 18, with a better format, should be: [Formula presented] In Equation 19, the second [Formula presented]. Equation 19 reads as: [Formula presented] It should be: [Formula presented] The authors apologise for any inconvenience caused.