Abstract: This paper aims to analyze possible mechanisms underlying the generation of generalized periodic epileptiform discharges (GPEDs) and, in particular, to design targeted optogenetic regulation strategies. Inspired by existing physiological experiments, we first propose a new computational framework by introducing a second inhibitory neuronal population and related synaptic connections into the classic Liley mean-field model. The improved model can simulate the basic normal and abnormal brain activities reported in previous studies and, notably, reproduces several types of GPEDs that match clinical records. Specifically, the results show that disinhibitory synaptic connections between the inhibitory interneuronal populations are closely related to the occurrence, transition, and termination of GPEDs: they delay the onset of GPEDs caused by excitatory AMPAergic autapses and regulate the transition process of GPEDs bidirectionally, supporting the conjecture that selective changes in synaptic connections can trigger GPEDs. Additionally, we propose six optogenetic strategies with dual targets. All of them control GPEDs well, consistent with experiments showing that optogenetic stimulation of inhibitory interneurons can suppress abnormal activities in epilepsy and other brain diseases. More importantly, 1:1 coordinated reset stimulation with one period of rest emerges as the optimal strategy once energy consumption and control effect are both taken into account. We hope these results provide feasible references for the pathophysiological mechanisms of GPEDs. (C) 2022 Elsevier Ltd. All rights reserved.
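The "1:1 coordinated reset stimulation with one period of rest" named as optimal above can be illustrated with a minimal schedule generator. This is only a sketch of the general coordinated-reset idea (alternating one stimulation period with one silent period, with the dual targets activated sequentially within each stimulation period); the function name, time resolution, and pulse widths are our assumptions, not the paper's protocol.

```python
import numpy as np

def cr_schedule(n_cycles, n_targets=2, steps_per_period=100, on_fraction=0.5):
    """Sketch of a 1:1 coordinated reset (CR) schedule with one period of rest:
    each stimulation period is followed by one silent period, and within a
    stimulation period the targets fire sequentially (phase-shifted slots)."""
    slot = steps_per_period // n_targets         # sequential slot per target
    on_steps = int(slot * on_fraction)           # pulse width inside a slot
    schedule = np.zeros((n_targets, 2 * n_cycles * steps_per_period))
    for c in range(n_cycles):
        start = 2 * c * steps_per_period         # stimulation period; rest follows
        for t in range(n_targets):
            schedule[t, start + t * slot : start + t * slot + on_steps] = 1.0
    return schedule
```

Because the targets occupy disjoint slots, no two populations are stimulated simultaneously, and every second period is entirely silent, which is where the energy saving of the "one period rest" variant comes from.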
Abstract: Learning in deep neural networks (DNNs) is implemented through minimizing a highly non-convex loss function, typically by a stochastic gradient descent (SGD) method. This learning process can effectively find generalizable solutions at flat minima. In this study, we present a novel account of how such effective deep learning emerges through the interactions of the SGD and the geometrical structure of the loss landscape. We find that the SGD exhibits rich, complex dynamics when navigating through the loss landscape; initially, the SGD exhibits superdiffusion, which attenuates gradually and changes to subdiffusion at long times when approaching a solution. Such learning dynamics occur ubiquitously in different DNN types such as ResNet, VGG-like networks, and Vision Transformers; similar results emerge for various batch-size and learning-rate settings. The superdiffusion during the initial learning phase indicates that the motion of SGD along the loss landscape possesses intermittent, big jumps; this non-equilibrium property enables the SGD to effectively explore the loss landscape. By adapting methods developed for studying energy landscapes in complex physical systems, we find that such superdiffusive learning processes are due to the interactions of the SGD and the fractal-like regions of the loss landscape. We further develop a phenomenological model to demonstrate the mechanistic role of the fractal-like loss landscape in enabling the SGD to effectively find flat minima. Our results reveal the effectiveness of SGD in deep learning from a novel perspective and have implications for designing efficient deep neural networks. (C) 2022 Elsevier Ltd. All rights reserved.
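The super/subdiffusion classification above rests on the standard mean-squared-displacement (MSD) analysis: MSD(t) ~ t^alpha, with alpha > 1 superdiffusive, alpha < 1 subdiffusive, and alpha = 1 normal diffusion. A minimal sketch of that estimator, applied to a trajectory of SGD iterates flattened into parameter space (the trajectory source and lag range are our choices):

```python
import numpy as np

def diffusion_exponent(traj, lags):
    """Estimate the anomalous diffusion exponent alpha from a trajectory
    (steps x dims) via the log-log slope of the mean-squared displacement:
    MSD(lag) ~ lag**alpha."""
    msd = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                    for lag in lags])
    alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)  # log-log slope
    return alpha
```

Applied to a plain random walk this returns alpha near 1, while a ballistic (straight-line) trajectory gives alpha = 2, the superdiffusive extreme.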
Abstract: A large number of neurons form cell assemblies that process information in the brain. Recent developments in measurement technology, one of which is calcium imaging, have made it possible to study cell assemblies. In this study, we aim to extract cell assemblies from calcium imaging data. We propose a clustering approach based on non-negative matrix factorization (NMF). The proposed approach first obtains a similarity matrix between neurons by NMF and then performs spectral clustering on it. The application of NMF entails the problem of model selection: the number of bases in NMF affects the result considerably, and a suitable selection method is yet to be established. We attempt to resolve this problem by model averaging with a newly defined estimator based on NMF. Experiments on simulated data suggest that the proposed approach is superior to conventional correlation-based clustering methods over a wide range of sampling rates. We also analyzed calcium imaging data from sleeping/waking mice, and the results suggest that the size of a cell assembly depends on the degree and spatial extent of slow-wave generation in the cerebral cortex. (c) 2022 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
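The two-step pipeline above (NMF on the activity matrix, then spectral clustering on a neuron-neuron similarity matrix) can be sketched with scikit-learn. The specific similarity used here, cosine similarity of the NMF basis weights, is our assumption; the paper defines its own estimator and handles the number of bases by model averaging, which this sketch omits.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import SpectralClustering

def nmf_spectral_assemblies(activity, n_bases, n_assemblies, seed=0):
    """Factorize a non-negative (neurons x time) activity matrix with NMF,
    build a neuron-neuron similarity from the basis weights, then run
    spectral clustering on the precomputed similarity matrix."""
    W = NMF(n_components=n_bases, init="random", random_state=seed,
            max_iter=500).fit_transform(activity)      # neurons x bases
    Wn = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
    similarity = Wn @ Wn.T                             # cosine similarity
    return SpectralClustering(n_clusters=n_assemblies,
                              affinity="precomputed",
                              random_state=seed).fit_predict(similarity)
```

On synthetic data with two assemblies sharing distinct temporal patterns, the recovered labels split cleanly along the assembly boundary.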
Abstract: In many real-world classification problems, the available information is often uncertain. In order to effectively describe the inherent vagueness and improve classification performance, this paper proposes a novel possibilistic classification algorithm using support vector machines (SVMs). Based on possibility theory, the proposed algorithm aims at finding a maximal-margin fuzzy hyperplane by solving a fuzzy mathematical optimization problem. Moreover, the decision function of the proposed approach is generalized such that the values assigned to the data vectors fall within a specified range and indicate the membership grade of these data vectors in the positive class. The proposed algorithm retains the advantages of both fuzzy set theory and SVM theory, and it is more robust in handling data corrupted by outliers. Moreover, the structural risk minimization principle of SVMs enables the proposed approach to effectively classify unseen data. Furthermore, the proposed algorithm has the additional advantage of using a vagueness parameter ν for controlling the bounds on the fractions of support vectors and errors. Extensive experiments performed on benchmark datasets and real applications demonstrate that the proposed algorithm has satisfactory generalization accuracy and better describes the inherent vagueness in the given dataset. (C) 2022 Elsevier Ltd. All rights reserved.
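The generalized decision function above, mapping each sample to a membership grade in the positive class rather than a hard label, can be illustrated on top of an ordinary SVM. The linear ramp inside a fuzzy margin used here is our own stand-in, not the paper's possibilistic formulation:

```python
import numpy as np
from sklearn.svm import SVC

def fuzzy_membership(clf, X, margin=1.0):
    """Map the signed distance to the separating hyperplane into [0, 1],
    read as the membership grade of each sample in the positive class:
    0 well inside the negative side, 1 well inside the positive side,
    intermediate values inside the fuzzy margin."""
    d = clf.decision_function(X)                 # signed (scaled) distance
    return np.clip((d + margin) / (2 * margin), 0.0, 1.0)
```

Samples far from the hyperplane saturate at 0 or 1, while samples near the boundary receive graded memberships, which is the behaviour the abstract describes.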
Abstract: The global exponential synchronization problem for coupled neural networks with time-delayed impulses is investigated in this paper. Based on the characteristics of coupled neural networks and related theorems, we build a novel coupled-system model. To better fit real situations, the impulses considered here are flexible, and the impulsive delays may exceed the impulsive interval under certain conditions; our results are therefore less restrictive and more practical than existing ones. In addition, by using the average impulsive delay (AID) and the average impulsive interval (AII), we investigate two different effects of impulses on synchronization and derive several sufficient conditions for different types of synchronization. Finally, two numerical simulation examples are presented to illustrate the effectiveness of the conclusions.
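The mechanism described above, impulses acting on a delayed state yet still stabilizing the synchronization error, can be shown in a toy scalar simulation. All dynamics and parameter values here are illustrative choices of ours, not the paper's model: the error drifts as e' = a·e between impulses, and each impulse resets e(t_k) to mu·e(t_k - tau).

```python
import numpy as np

def impulsive_error(a=0.5, mu=0.3, dt=0.001, T=5.0,
                    interval=0.2, delay=0.05, e0=1.0):
    """Toy synchronization error under time-delayed impulses: exponential
    growth e' = a*e between impulses; at each impulse instant t_k the error
    jumps to mu * e(t_k - tau), i.e. the impulse uses a delayed state."""
    n, d, k = int(T / dt), int(delay / dt), int(interval / dt)
    e = np.empty(n + 1)
    e[0] = e0
    for i in range(n):
        e[i + 1] = e[i] + dt * a * e[i]           # continuous drift
        if (i + 1) % k == 0:                      # impulse instant
            e[i + 1] = mu * e[max(i + 1 - d, 0)]  # delayed impulsive jump
    return e
```

With these values each impulse-to-impulse cycle contracts the error by roughly mu·exp(a·(interval - delay)) ≈ 0.32 < 1, so the error decays to zero even though it grows between impulses, a scalar caricature of the sufficient conditions the paper derives via AID/AII.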
Abstract: We propose a novel algorithm called Backpropagation Neural Tree (BNeuralT), which is a stochastic computational dendritic tree. BNeuralT takes random repeated inputs through its leaves and imposes dendritic nonlinearities through its internal connections, as a biological dendritic tree would. Given these biologically plausible dendritic-tree properties, BNeuralT is a single-neuron neural tree model whose internal sub-trees resemble dendritic nonlinearities. The BNeuralT algorithm produces an ad hoc neural tree that is trained with a stochastic gradient descent optimizer such as gradient descent (GD), momentum GD, Nesterov accelerated GD, Adagrad, RMSprop, or Adam. BNeuralT training has two phases, each computed in a depth-first-search manner: the forward pass computes the neural tree's output in a post-order traversal, while the error backpropagation during the backward pass is performed recursively in a pre-order traversal. A BNeuralT model can be considered a minimal subset of a neural network (NN), i.e., a "thinned" NN whose complexity is lower than that of an ordinary NN. Our algorithm produces high-performing and parsimonious models, balancing complexity with descriptive ability, on a wide variety of machine learning problems: classification, regression, and pattern recognition. (C) 2022 Elsevier Ltd. All rights reserved.
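The post-order forward pass described above can be sketched with a minimal recursive tree: a leaf reads one input feature, and an internal node applies a nonlinearity to the weighted sum of its children, evaluated bottom-up. The sigmoid and the class layout are our simplifications of the dendritic nonlinearity, and the backward pre-order pass is omitted:

```python
import math

class Node:
    """Minimal neural-tree node: a leaf passes through one input feature;
    an internal node applies a sigmoid to the weighted sum of its children
    (a simplified stand-in for a dendritic nonlinearity)."""
    def __init__(self, children=None, weights=None, feature=None):
        self.children, self.weights, self.feature = children, weights, feature

    def forward(self, x):
        if self.children is None:                # leaf: read input feature
            return x[self.feature]
        # post-order traversal: evaluate every sub-tree before this node
        s = sum(w * c.forward(x) for w, c in zip(self.weights, self.children))
        return 1.0 / (1.0 + math.exp(-s))        # dendritic nonlinearity
```

Arbitrarily nested sub-trees compose naturally, which is what lets the model act as a "thinned" network: only the connections present in the tree exist at all.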
Abstract: Single image super-resolution is an ill-posed problem whose purpose is to acquire a high-resolution image from its degraded observation. Existing deep learning-based methods compromise between performance and speed due to the heavy design (i.e., huge model size) of their networks. In this paper, we propose a novel high-performance cross-domain heterogeneous residual network for super-resolved image reconstruction. Our network models heterogeneous residuals between different feature layers by hierarchical residual learning. In outer residual learning, dual-domain enhancement modules extract frequency-domain information to reinforce the space-domain features of the network mapping. In middle residual learning, wide-activated residual-in-residual dense blocks are constructed by concatenating the outputs of previous blocks as the inputs to all subsequent blocks for better parameter efficacy. In inner residual learning, wide-activated residual attention blocks are introduced to capture direction- and location-aware feature maps. The proposed method was evaluated on four benchmark datasets, showing that it constructs high-quality super-resolved images and achieves state-of-the-art performance. Code and pre-trained models are available at https://github.com/zhangyongqin/HRN. (C) 2022 Elsevier Ltd. All rights reserved.
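The dual-domain idea in the outer residual learning, using frequency-domain information to reinforce spatial features, can be illustrated with a single-channel numpy sketch: transform a feature map with the FFT, amplify its high-frequency band (where edges and fine textures live), and return to the spatial domain. The radial cutoff and gain are illustrative choices, not the paper's actual module:

```python
import numpy as np

def frequency_enhance(feat, gain=0.5):
    """Amplify the high-frequency band of a 2-D feature map: FFT, boost
    coefficients outside a radial low-frequency disc, inverse FFT."""
    F = np.fft.fftshift(np.fft.fft2(feat))       # zero frequency at center
    h, w = feat.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)         # radial frequency
    mask = 1.0 + gain * (r > min(h, w) / 4)      # boost high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

A constant (purely low-frequency) map passes through unchanged, while a checkerboard (purely high-frequency) map is amplified by the full gain, exactly the selective sharpening a dual-domain enhancement module aims for.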
Abstract: Lifelong Learning (LL) refers to the ability to continually learn and solve new problems with incrementally available information over time while retaining previous knowledge. Much attention has been given lately to Supervised Lifelong Learning (SLL) with a stream of labelled data. In contrast, we focus on resolving challenges in Unsupervised Lifelong Learning (ULL) with streaming unlabelled data when the data distribution and the unknown class labels evolve over time. A Bayesian framework is natural for incorporating past knowledge and sequentially updating the belief with new data. We develop a fully Bayesian inference framework for ULL with a novel end-to-end Deep Bayesian Unsupervised Lifelong Learning (DBULL) algorithm, which can progressively discover new clusters from unlabelled data without forgetting the past while learning latent representations. To efficiently maintain past knowledge, we develop a novel knowledge preservation mechanism via sufficient statistics of the latent representation of the raw data. To detect potential new clusters on the fly, we develop an automatic cluster discovery and redundancy removal strategy in our inference, inspired by nonparametric Bayesian statistics techniques. We demonstrate the effectiveness of our approach on image and text corpora benchmark datasets in both lifelong and batch settings. (C) 2022 Elsevier Ltd. All rights reserved.
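The knowledge-preservation mechanism above rests on a standard fact: for Gaussian summaries, the count, the sum, and the sum of outer products of the latent vectors are sufficient statistics, so past data can be discarded once they are accumulated. A minimal sketch of that accumulator (the class name and its use for mean/covariance recovery are our illustration, not DBULL's exact bookkeeping):

```python
import numpy as np

class RunningStats:
    """Keep only (count, sum, sum of outer products) of latent vectors,
    from which the mean and covariance of all past data are recoverable
    without storing the data themselves."""
    def __init__(self, dim):
        self.n, self.s, self.ss = 0, np.zeros(dim), np.zeros((dim, dim))

    def update(self, z):                   # z: batch of latent vectors
        self.n += len(z)
        self.s += z.sum(axis=0)
        self.ss += z.T @ z

    def mean(self):
        return self.s / self.n

    def cov(self):
        m = self.mean()
        return self.ss / self.n - np.outer(m, m)
```

After any number of streamed batches, the recovered mean and covariance equal those computed on the concatenated data, which is what makes the summary "sufficient" for lifelong updating.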
Abstract: Probabilistic finite mixture models are widely used for unsupervised clustering. These models can often be improved by adapting them to the topology of the data. For instance, in order to classify spatially adjacent data points similarly, it is common to introduce a Laplacian constraint on the posterior probability that each data point belongs to a class. Alternatively, the mixing probabilities can be treated as free parameters, while assuming Gauss-Markov or more complex priors to regularize those mixing probabilities. However, these approaches are constrained by the shape of the prior and often lead to complicated or intractable inference. Here, we propose a new parametrization of the Dirichlet distribution to flexibly regularize the mixing probabilities of over-parametrized mixture distributions. Using the Expectation-Maximization algorithm, we show that our approach allows us to define any linear update rule for the mixing probabilities, including spatial smoothing regularization as a special case. We then show that this flexible design can be extended to share class information between multiple mixture models. We apply our algorithm to artificial and natural image segmentation tasks, and we provide a quantitative and qualitative comparison of the performance of Gaussian and Student-t mixtures on the Berkeley Segmentation Dataset. We also demonstrate how to propagate class information across the layers of deep convolutional neural networks in a probabilistically optimal way, suggesting a new interpretation for feedback signals in biological visual systems. Our flexible approach can easily be generalized to adapt probabilistic mixture models to arbitrary data topologies. (C) 2022 Elsevier Ltd. All rights reserved.
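The "any linear update rule for the mixing probabilities" claim above can be made concrete with a 1-D Gaussian mixture in which each data point carries its own mixing probabilities, and after every E-step those probabilities are blended with their neighbors' average, a simple instance of spatial smoothing. This sketch is our own plain-EM caricature; it omits the paper's Dirichlet parametrization and its optimality analysis:

```python
import numpy as np

def em_smoothed_gmm(x, n_classes, n_iter=50, smooth=0.5):
    """EM for a 1-D Gaussian mixture with per-point mixing probabilities,
    linearly smoothed toward the nearest-neighbor average each iteration."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, n_classes))  # spread-out init
    sigma = np.full(n_classes, x.std() + 1e-6)
    pi = np.full((len(x), n_classes), 1.0 / n_classes)     # per-point priors
    for _ in range(n_iter):
        # E-step: responsibilities under the per-point priors
        lik = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = pi * lik
        r /= r.sum(axis=1, keepdims=True)
        # linear update rule: blend with neighbor average (wraps at ends)
        neighbor = (np.roll(r, 1, axis=0) + np.roll(r, -1, axis=0)) / 2
        pi = (1 - smooth) * r + smooth * neighbor
        # M-step: component parameters from responsibilities
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return r.argmax(axis=1), mu
```

Replacing the `neighbor` line with any other linear operator on `r` yields a different regularization of the mixing probabilities without changing the rest of the algorithm, which is the flexibility the abstract emphasizes.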
Abstract: As a special case of multi-class classification, ordinal regression (also known as ordinal classification) is a popular method for tackling multi-class problems whose samples are marked by a set of ranks. Semi-supervised ordinal regression (SSOR) is especially important for data mining applications because semi-supervised learning can make use of unlabeled samples to train a high-quality learning model. However, to the best of our knowledge, the training of large-scale SSOR remains an open question due to its complicated formulation and non-convexity. To address this challenging problem, we propose an incremental learning algorithm for SSOR (IL-SSOR), which can directly update the solution of SSOR based on the KKT conditions. More critically, we analyze the finite convergence of IL-SSOR, which guarantees that SSOR can converge to a local minimum within the framework of the concave-convex procedure. To the best of our knowledge, IL-SSOR is the first efficient online learning algorithm for SSOR with a local-minimum convergence guarantee. Our experimental results show that IL-SSOR achieves better generalization than other semi-supervised multi-class algorithms, and that, compared with other semi-supervised ordinal regression algorithms, it achieves similar generalization with less running time. (C) 2022 Elsevier Ltd. All rights reserved.
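For readers unfamiliar with ordinal regression, the generic threshold model underlying most formulations (including SVM-based ones like SSOR) is easy to state: a latent score f(x) is cut by ordered thresholds b_1 < ... < b_{K-1}, and the predicted rank is the number of thresholds the score exceeds. This sketch shows only that generic prediction rule, not IL-SSOR's incremental KKT-based update:

```python
import numpy as np

def predict_rank(scores, thresholds):
    """Threshold model for ordinal regression: the rank of a sample is the
    number of ordered thresholds its latent score exceeds, so predictions
    respect the natural ordering of the classes."""
    thresholds = np.sort(np.asarray(thresholds))     # enforce b_1 < ... < b_{K-1}
    return (np.asarray(scores)[:, None] > thresholds).sum(axis=1)
```

Because ranks are produced by counting threshold crossings, a higher latent score can never receive a lower rank, which is exactly the ordering constraint that separates ordinal regression from unstructured multi-class classification.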