Journal information
Neural Networks
Pergamon Press
ISSN: 0893-6080

Indexed in: SCI, AHCI, EI, ISTP
Officially published

    Deep adversarial transition learning using cross-grafted generative stacks

    Hou, Jinyong; Ding, Xuejie; Deng, Jeremiah D.; Cranefield, Stephen; ...
    12 pages
    Abstract: As a common approach to deep domain adaptation in computer vision, current works have mainly focused on learning domain-invariant features from different domains, achieving limited success in transfer learning. In this paper, we present a novel "deep adversarial transition learning" (DATL) framework that bridges the domain gap by generating intermediate, transitional spaces between the source and target domains through adjustable, cross-grafted generative network stacks and effective adversarial learning between transitions. Specifically, variational auto-encoders (VAEs) are constructed for the domains, and bidirectional transitions are formed by cross-grafting the VAEs' decoder stacks. Generative adversarial networks are then employed to map the target domain data to the label space of the source domain, which is achieved by aligning the transitions initiated by the different domains. This results in a new, effective learning paradigm, where training and testing are carried out in the associated transitional spaces instead of the original domains. Experimental results demonstrate that our method outperforms the state of the art on a number of unsupervised domain adaptation benchmarks. (C) 2022 Elsevier Ltd. All rights reserved.
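
    To make the cross-grafting idea concrete, here is a minimal PyTorch sketch, assuming each domain VAE's decoder is split into two stacks that can be recomposed across domains; the names (VAE, transition) and all layer sizes are illustrative assumptions, not the authors' architecture:

        import torch
        import torch.nn as nn

        class VAE(nn.Module):
            # Minimal VAE whose decoder is split into two stacks so they can be cross-grafted.
            def __init__(self, dim=784, hidden=256, latent=32):
                super().__init__()
                self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
                self.mu = nn.Linear(hidden, latent)
                self.logvar = nn.Linear(hidden, latent)
                self.dec_low = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU())  # lower decoder stack
                self.dec_up = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())   # upper decoder stack

            def encode(self, x):
                h = self.enc(x)
                mu, logvar = self.mu(h), self.logvar(h)
                return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick

        vae_s, vae_t = VAE(), VAE()  # one VAE per domain (source, target)

        def transition(x, enc_vae, low_vae, up_vae):
            # Cross-grafted generation: encode with one domain's VAE, then decode
            # through a lower stack from one domain and an upper stack from the other.
            return up_vae.dec_up(low_vae.dec_low(enc_vae.encode(x)))

        x_s = torch.rand(8, 784)
        x_st = transition(x_s, vae_s, vae_s, vae_t)  # source latent -> source lower -> target upper
        x_ts = transition(x_s, vae_s, vae_t, vae_t)  # same input through the opposite grafting

    In the full framework, GAN discriminators would additionally be trained to align the transitions generated from the two domains; only the grafting mechanics are shown here.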

    Modeling learnable electrical synapse for high precision spatio-temporal recognition

    Wu, Zhenzhi; Zhang, Zhihong; Gao, Huanhuan; Qin, Jun; ...
    11 pages
    Abstract: Bio-inspired recipes are being introduced into artificial neural networks for the efficient processing of spatio-temporal tasks. Among them, the Leaky Integrate-and-Fire (LIF) model is the most notable, thanks to its temporal processing capability, lightweight model structure, and well-investigated direct training methods. However, most learnable LIF networks treat neurons as independent individuals that communicate via chemical synapses, leaving electrical synapses behind. In contrast, it is well established in biological neural networks that inter-neuron electrical synapses play an important role in the coordination and synchronization of action potential generation. In this work, we model such electrical synapses in artificial LIF neurons, where membrane potentials propagate to neighboring neurons via convolution operations, and propose the refined neural model ECLIF. We then build deep networks using ECLIF and train them with a back-propagation-through-time algorithm. We find that the proposed network achieves significant accuracy improvements over the traditional LIF model on five datasets. This suggests that introducing the electrical synapse is an important factor for achieving high accuracy on realistic spatio-temporal tasks.
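
    A minimal sketch, assuming the electrical synapse can be modeled as a depthwise convolution that mixes each neuron's membrane potential with its neighbors'; the ECLIFLayer name, tensor layout, and update rule below are illustrative assumptions, not the paper's exact ECLIF formulation:

        import torch
        import torch.nn as nn

        class ECLIFLayer(nn.Module):
            # LIF layer with an illustrative electrical-synapse term: at each timestep,
            # membrane potentials are coupled to neighbors through a depthwise convolution.
            def __init__(self, channels, tau=2.0, v_th=1.0):
                super().__init__()
                self.tau, self.v_th = tau, v_th
                self.couple = nn.Conv1d(channels, channels, kernel_size=3,
                                        padding=1, groups=channels, bias=False)

            def forward(self, x_seq):                  # x_seq: (T, B, C, N) input currents
                v = torch.zeros_like(x_seq[0])
                spikes = []
                for x in x_seq:
                    v = v + (x - v) / self.tau         # leaky integration of the input current
                    v = v + self.couple(v)             # electrical synapse: potential leaks to neighbors
                    s = (v >= self.v_th).float()       # fire when the threshold is crossed
                    v = v * (1.0 - s)                  # hard reset after a spike
                    spikes.append(s)
                return torch.stack(spikes)

        layer = ECLIFLayer(channels=16)
        out = layer(torch.rand(10, 4, 16, 32))         # 10 timesteps, batch 4, 16 channels, 32 neurons

    Training such a layer with back-propagation through time would in practice require a surrogate gradient for the non-differentiable threshold.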

    Sign Stochastic Gradient Descents without bounded gradient assumption for the finite sum minimization

    Sun, Tao; Li, Dongsheng
    9 pages
    Abstract: Sign-based Stochastic Gradient Descents (sign-based SGDs) use the signs of stochastic gradients to reduce communication costs. Nevertheless, current convergence results for sign-based SGDs applied to finite sum optimization rest on a bounded-gradient assumption, which fails to hold in various cases. This paper presents a convergence framework for sign-based SGDs that eliminates the bounded gradient assumption. Ergodic convergence rates are provided under only a smoothness assumption on the objective functions. The sign stochastic gradient descent (signSGD) and its two variants, a majority-vote and a zeroth-order version, are developed for different application settings. Our framework also removes the bounded gradient assumption used in previous analyses of these three algorithms. (C) 2022 Elsevier Ltd. All rights reserved.
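
    The basic sign update and its majority-vote variant are easy to sketch. The following is a generic PyTorch illustration of sign-based updates (the function names are hypothetical), not the authors' precise algorithms or convergence analysis:

        import torch

        def signsgd_step(params, lr=1e-2):
            # One signSGD update: step by the sign of each stochastic gradient,
            # so only one bit per coordinate needs to be communicated.
            with torch.no_grad():
                for p in params:
                    if p.grad is not None:
                        p.add_(torch.sign(p.grad), alpha=-lr)

        def majority_vote_step(worker_grads, params, lr=1e-2):
            # Majority-vote variant: each worker transmits sign(grad); the server
            # applies the sign of the elementwise vote across workers.
            with torch.no_grad():
                for i, p in enumerate(params):
                    vote = sum(torch.sign(g[i]) for g in worker_grads)
                    p.add_(torch.sign(vote), alpha=-lr)

        w = torch.nn.Parameter(torch.randn(5))
        (w ** 2).sum().backward()
        signsgd_step([w])  # each coordinate of w moves by lr against the sign of its gradient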