Abstract: This study addresses the traffic light scheduling problem for pedestrian-vehicle mixed-flow networks. A macroscopic model, which strikes an appropriate balance between pedestrians' needs and vehicle drivers' needs, is employed to describe the traffic light scheduling problem in a scheduling framework. The objective is to minimize the total network-wise delay time of vehicles and pedestrians within a given finite-time window, which is crucial for avoiding traffic congestion in urban road networks. To achieve this objective, the present study first uses a well-known optimization solver, GUROBI, to obtain the optimal solution by converting the problem into a mixed-integer linear program. The obtained results indicate that the solver is computationally inefficient for large network sizes. To overcome this inefficiency, three novel metaheuristic methods based on the sine-cosine algorithm (SCA) are proposed: the discrete sine-cosine algorithm, the discrete sine-cosine algorithm with a local search operator, and the discrete sine-cosine algorithm with a local search operator and memory utilization inspired by harmony search. Each of these methods is developed hierarchically, building on the advantages of the previously developed method(s) to provide a better search process, more accurate solutions, and a better convergence rate. To validate the proposed metaheuristics, extensive computational experiments are carried out using the real traffic infrastructure of Singapore. Moreover, various performance measures, such as statistical optimization results, relative percentage deviation, computational time, statistical analysis, and convergence behavior analysis, have been employed to evaluate the performance of the algorithms. The proposed SCA variants are compared with the GUROBI solver and other metaheuristics, namely harmony search, the firefly algorithm, the bat algorithm, artificial bee colony, the genetic algorithm, the salp swarm algorithm, and Harris hawks optimization. The overall comparison concludes that the proposed methods are very efficient in solving the traffic light scheduling problem for pedestrian-vehicle mixed-flow networks with different network sizes and prediction time horizons. (C) 2022 Elsevier B.V. All rights reserved.
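The abstract does not spell out the discrete encoding used by the three variants, but the underlying continuous sine-cosine position update they build on is standard. A minimal sketch, assuming the usual parameterization (amplitude `a`, random factors `r1`-`r4`):

```python
import numpy as np

def sca_step(X, best, t, T, a=2.0, rng=np.random.default_rng()):
    """One iteration of the standard (continuous) sine-cosine algorithm.
    X: (n_agents, dim) positions, best: (dim,) best solution so far,
    t/T: current iteration / iteration budget, a: exploration amplitude."""
    r1 = a - t * a / T                         # linearly decays from a to 0
    r2 = rng.uniform(0.0, 2 * np.pi, X.shape)  # random phase
    r3 = rng.uniform(0.0, 2.0, X.shape)        # random weight on the destination
    r4 = rng.uniform(0.0, 1.0, X.shape)        # sine/cosine switch
    step = np.abs(r3 * best - X)
    return np.where(r4 < 0.5,
                    X + r1 * np.sin(r2) * step,
                    X + r1 * np.cos(r2) * step)
```

The discrete variants in the paper would map these continuous positions onto feasible signal schedules and add the local search operator and harmony-search-style memory on top; those details are not reproduced here.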
Abstract: We present a new technique for link weight prediction, the Link Weight Prediction Weisfeiler-Lehman (LWP-WL) method, which learns from graph structure features and link relationship patterns. Inspired by the Weisfeiler-Lehman Neural Machine, LWP-WL extracts an enclosing subgraph for the target link and applies a graph labelling algorithm for weighted graphs to produce an ordered subgraph adjacency matrix that is fed into a neural network. The first layer of the neural network is a Convolutional Neural Network that applies special filters adapted to the input graph representation. An extensive evaluation demonstrates an improvement over state-of-the-art methods on several weighted graphs. Furthermore, we conduct an ablation study to show how adding different features to our approach improves our technique's performance. Finally, we also study the complexity and scalability of our algorithm. Unlike other approaches, LWP-WL does not rely on a specific graph heuristic and can perform well on different kinds of graphs. (C) 2022 The Author(s). Published by Elsevier B.V.
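As a rough illustration of the enclosing-subgraph input that feeds the CNN, the sketch below extracts the h-hop neighbourhood of a target link from a weighted networkx graph and returns a fixed-size weighted adjacency matrix; the simple distance-based node ordering and the size cap `k` are assumptions, not the exact labelling algorithm used by LWP-WL.

```python
import networkx as nx
import numpy as np

def enclosing_subgraph_matrix(G, u, v, hops=1, k=10):
    """Extract the h-hop enclosing subgraph of link (u, v) and return a
    k x k weighted adjacency matrix, zero-padded for small subgraphs."""
    du = nx.single_source_shortest_path_length(G, u, cutoff=hops)
    dv = nx.single_source_shortest_path_length(G, v, cutoff=hops)
    nodes = set(du) | set(dv)
    # order nodes by their combined distance to the two target endpoints
    order = sorted(nodes, key=lambda n: du.get(n, hops + 1) + dv.get(n, hops + 1))
    order = order[:k]
    A = nx.to_numpy_array(G.subgraph(order), nodelist=order, weight="weight")
    out = np.zeros((k, k))
    out[:A.shape[0], :A.shape[1]] = A
    return out
```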
Abstract: Alzheimer's disease (AD) has become a severe chronic disease that affects the health of the elderly all over the world, and the number of patients continues to rise each year. Despite the rapid development of medical imaging technology and extensive work on AD diagnosis with new computer vision techniques, it remains challenging to diagnose AD and Mild Cognitive Impairment (MCI) as precisely as possible in an end-to-end manner relying only on Magnetic Resonance Imaging (MRI) data. In this paper, a new variant of the Broad Learning System (BLS) is presented for accurate diagnosis of AD and MCI from MRI images. The proposed model is composed of two modules: a feature mapping module and a feature enhancement module. To adapt to the characteristics of medical images, a new feature mapping module containing multiple groups of feature down-sampling is designed to obtain multi-scale features of the images without any additional feature selection. As a result, the proposed model can integrate the multi-scale convolution features of the feature mapping module and the abstract features of the feature enhancement module end-to-end when learning the AD diagnostic task. At the same time, the proposed model is lightweight, with significantly reduced complexity. To verify its validity, the ADNI-1 dataset was used in the experiments. After 5-fold cross-validation, the proposed model achieved accuracies of 91.83% and 75.52% for the AD and MCI diagnostic tasks, respectively. The experimental results demonstrate that the proposed model achieves better performance than other methods on the AD and MCI diagnostic tasks. (C) 2022 Elsevier B.V. All rights reserved.
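For context, the closed-form readout that a Broad Learning System applies once its mapped and enhancement features are built can be sketched as below; in the paper's variant the mapped features Z would come from the multi-scale convolutional down-sampling module rather than random linear maps, and the ridge parameter `lam` is an assumption.

```python
import numpy as np

def bls_readout(Z, H, Y, lam=1e-3):
    """Ridge-regression output weights of a Broad Learning System:
    Z - mapped feature nodes, H - enhancement nodes, Y - one-hot labels."""
    A = np.hstack([Z, H])  # concatenated broad state
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)

# prediction on new data: Y_hat = np.hstack([Z_test, H_test]) @ W
```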
Abstract: Automatic text summarization schemes are helpful for quickly skimming a text document. With this motivation, we introduce a two-stage hybrid model for the text summarization task that utilizes the strengths of various approaches. In the first stage, we cluster the sentences of a document according to their similarity using a partitional clustering algorithm, where a linear combination of the normalized Google distance and the word mover's distance is used to measure the dissimilarity between two sentences. The notion of gap statistics is exploited to approximate the number of partitions of the given document needed by the partitional clustering algorithm. In the second stage, we extract the significant sentences from each cluster (partition), which are recognized by their adjusted text feature scores. The teaching-learning-based optimization approach is used to find the optimal weights for the text features, whereas a fuzzy inference system with a full-fledged, human-generated knowledge base is employed to determine the final score of the sentences. Moreover, we also propose an exact method that solves the summarization problem by modeling it as an Integer Linear Programming (ILP) problem. We evaluate the proposed methods on three datasets: DUC 2001, DUC 2002, and CNN. The results on these standard datasets demonstrate the efficacy of the proposed methods. We further show that partitioning a document into an optimal number of clusters plays a major role in the content coverage of summaries. The performance of the proposed hybrid method shows that the combination of fuzzy, evolutionary, and clustering algorithms produces good summaries of documents. (C) 2022 Elsevier B.V. All rights reserved.
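The normalized Google distance term in the sentence-dissimilarity measure has a standard closed form; a small sketch follows, under the assumption that it is computed from occurrence counts (the sentence-level aggregation and the mixing weight `alpha` are not given in the abstract and are placeholders here).

```python
import math

def ngd(f_x, f_y, f_xy, n):
    """Normalized Google distance from occurrence counts: f_x and f_y are the
    counts of items x and y, f_xy their joint count, n the corpus size."""
    if f_xy == 0:
        return 1.0                              # no co-occurrence: cap at 1
    lx, ly, lxy = math.log(f_x), math.log(f_y), math.log(f_xy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

def combined_distance(ngd_value, wmd_value, alpha=0.5):
    """Hypothetical linear combination of NGD and word mover's distance."""
    return alpha * ngd_value + (1.0 - alpha) * wmd_value
```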
Abstract: With increasing task complexity and environmental uncertainty, it is hard to achieve adaptability and robustness with manual design methods for multi-robot cooperation tasks. Automatic synthesis approaches with trial-and-error mechanisms are therefore attracting more and more attention. By encoding the strategies to be designed as "ideas", the recently proposed Brain Storm Robotics (BSR) framework can obtain sufficiently good solutions for particular tasks after a series of operations on the ideas. However, the original BSR was only demonstrated on designing the rule base of a fuzzy controller. This paper proposes an automatic design approach within the BSR framework for neural network-based strategies for robotic swarms to realize cooperative behaviors. Two design cases are studied: one is a direct strategy search for a swarm aggregation behavior; the other is synthesizing a backpropagation neural network-based controller for coordinated formation control, which exhibits both evolution and learning characteristics. The results show that the proposed method can automatically find scalable control strategies for multi-robot cooperation, with potential for further development. (C) 2022 Elsevier B.V. All rights reserved.
Abstract: The success of machine learning models over the last few years is mostly related to the significant progress of deep neural networks. These powerful and flexible models can even surpass human-level performance in tasks such as image recognition and strategy games. However, experts need to spend considerable time and resources to design the network structure, and the demand for new architectures drives interest in automating this design process. Researchers have proposed new algorithms to address the neural architecture search (NAS) problem, including efforts to reduce the high computational cost of such methods. A common approach to improve efficiency is to reduce the search space with the help of expert knowledge, searching for cells rather than entire networks. Motivated by the faster convergence promoted by quantum-inspired evolutionary methods, the Q-NAS algorithm was proposed to address the NAS problem without relying on cell search. In this work, we consolidate Q-NAS by adding a new penalization feature, enhancing its retraining scheme, and investigating more challenging search spaces than before. On CIFAR-10, we reached a test accuracy of 93.85% in 67 GPU days with the addition of an early-stopping mechanism. We also applied Q-NAS to CIFAR-100 without modifying the parameters, and our best accuracy was 74.23%, which is comparable to ResNet164. The enhancements and results presented in this work show that Q-NAS can automatically generate network architectures that outperform hand-designed models for CIFAR-10 and CIFAR-100. Compared to other NAS methods, Q-NAS results are promising regarding the balance between performance, runtime efficiency, and automation. We believe that our results enrich the discussion on this balance by considering alternatives to the cell search approach. (C) 2022 Elsevier B.V. All rights reserved.
Abstract: RNA-protein interactions (RPI) play a crucial role in fundamental cellular physiological processes. Traditional methods to determine RPI rely on expensive and labor-intensive biological experiments, and existing computational methods are far from satisfactory, so there is a timely need for more cost-effective methods to predict RPI. A stacking ensemble deep learning-based framework (named RPI-MDLStack) is constructed for RPI prediction in this study. First, sequential, physicochemical, structural, and evolutionary information from RNA and protein sequences is obtained through eight feature extraction methods. Then, an optimal feature set is generated by eliminating the redundancy of the fused features with the least absolute shrinkage and selection operator (LASSO). Following the stacking strategy, the optimal feature set is first learned by a base-classifier combination composed of a multilayer perceptron (MLP), a support vector machine (SVM), a random forest (RF), a gated recurrent unit (GRU), and deep neural networks (DNN). Finally, the prediction scores are fed into a discriminative model for further training. The results of 5-fold cross-validation tests demonstrate the superior identification performance of RPI-MDLStack, with accuracies of 96.7%, 87.3%, 94.6%, 97.1%, and 89.5% on RPI488, RPI369, RPI2241, RPI1807, and RPI1446, respectively. Additionally, RPI-MDLStack obtained an overall prediction accuracy of 97.8% in independent tests when trained on RPI488. Compared with other state-of-the-art RPI prediction methods on the same datasets, RPI-MDLStack is more robust and stable for predicting RPI. (C) 2022 Elsevier B.V. All rights reserved.
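A minimal scikit-learn sketch of a LASSO-plus-stacking pipeline of this kind is shown below; only the classical base learners are included (the GRU/DNN branches would require a deep-learning framework), and the logistic-regression meta-learner and the LASSO alpha are assumptions, since the abstract only says the scores are fed into a discriminative model.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso, LogisticRegression
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# LASSO-based redundancy elimination followed by a stacking ensemble.
model = make_pipeline(
    StandardScaler(),
    SelectFromModel(Lasso(alpha=0.01)),   # drop redundant fused features
    StackingClassifier(
        estimators=[
            ("mlp", MLPClassifier(max_iter=500)),
            ("svm", SVC(probability=True)),
            ("rf", RandomForestClassifier(n_estimators=200)),
        ],
        final_estimator=LogisticRegression(),  # assumed meta-learner
        cv=5,  # out-of-fold predictions feed the meta-learner
    ),
)
# usage: model.fit(X_train, y_train); model.predict(X_test)
```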
Abstract: In this paper, a multiagent system-based cuckoo search optimization (MASCSO) algorithm is developed by combining a multiagent system (MAS) and cuckoo search optimization (CSO) to exploit their complementary nature. The existing behavioral rules in the MAS are modified to improve convergence. The MASCSO algorithm is tested on single-objective bound-constrained benchmark functions, and nonparametric statistical analysis is performed to validate it against benchmark algorithms. The proposed MASCSO algorithm is then applied to estimate the parameters of a photovoltaic (PV) cell and module using the Lambert W-function (MASCSO(L)) and direct (MASCSO(D)) current estimation approaches, respectively. The relative power error percentage at the maximum power point (%ΔPMPP) is proposed to assess the effectiveness of these parameter estimation techniques. The results indicate that the parameters estimated with the MASCSO(L) technique lower %ΔPMPP by 54.46% and 38.88% for the PV cell and module, respectively. (C) 2022 Elsevier B.V. All rights reserved.
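For reference, the Lambert W-function route to the PV current uses the explicit solution of the single-diode model; the sketch below also includes one plausible form of the %ΔPMPP metric, whose exact expression is not given in the abstract.

```python
import numpy as np
from scipy.special import lambertw

def diode_current(V, Iph, I0, Rs, Rsh, a):
    """Explicit Lambert-W solution of the single-diode PV model
    I = Iph - I0*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rsh,
    with a = n*Ns*Vt the modified ideality factor."""
    arg = (Rs * Rsh * I0) / (a * (Rs + Rsh)) * np.exp(
        Rsh * (Rs * (Iph + I0) + V) / (a * (Rs + Rsh)))
    return (Rsh * (Iph + I0) - V) / (Rs + Rsh) - (a / Rs) * lambertw(arg).real

def rel_power_error_mpp(p_meas, p_est):
    """Assumed form of %ΔPMPP: relative power error (%) at the maximum power point."""
    return abs(p_meas - p_est) / p_meas * 100.0
```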
Otchere, Daniel Asante; Ganat, Tarek Omar Arbi; Nta, Vanessa; Brantson, Eric Thompson...
20 pages
Abstract: Accurate net pay classification is essential in hydrocarbon resource volumetric calculation. However, there is no universal methodology for its evaluation, hence the many incongruent views on its application, since it is data-driven and differs for each reservoir. This research incorporates machine learning and data analytics in predicting net pay, intending to reduce the uncertainties associated with net-pay classification. Log analysis was performed to determine the cut-offs for the sonic, neutron, density, and gamma-ray logs using unsupervised learning and data analytics. The log cut-offs were calculated against the petrophysical properties: shale volume, water saturation, permeability, and porosity. A Bayesian Optimised Extreme Gradient Boosting (Bayes Opt-XGBoost) model was applied to predict the petrophysical properties from five wireline logs. Combined with a computational function for classifying net reservoir, the model achieved an accuracy of 0.93, a combined precision of 0.94, a combined recall of 0.92, and a combined F1-score of 0.93. The model and methodology were deployed on a new well for validation. The net reservoir zones classified via the proposed data analytics method, the Bayes Opt-XGBoost-predicted petrophysical properties, and the computational function code matched the mobility drawdown test data for that well. These results indicate that the developed methodology and machine learning model can work for other reservoirs, since the computational function code can be adapted to any data-driven estimated cut-offs. This approach can determine net reservoir and net pay zones in any sandstone reservoir. (C) 2022 Elsevier B.V. All rights reserved.
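The cut-off-based net reservoir flag that such a computational function applies can be sketched as follows; the threshold values are placeholders only, since in the paper they are derived per reservoir from the log analysis rather than fixed.

```python
import numpy as np

def net_pay_flag(vsh, phi, sw, k,
                 vsh_cut=0.4, phi_cut=0.10, sw_cut=0.6, k_cut=1.0):
    """Flag net pay from predicted petrophysical properties and cut-offs.
    vsh: shale volume (frac), phi: porosity (frac),
    sw: water saturation (frac), k: permeability (mD)."""
    flag = ((np.asarray(vsh) <= vsh_cut) & (np.asarray(phi) >= phi_cut) &
            (np.asarray(sw) <= sw_cut) & (np.asarray(k) >= k_cut))
    return flag.astype(int)

# usage, vectorized over depth-indexed arrays of predicted properties:
# flags = net_pay_flag(vsh_pred, phi_pred, sw_pred, k_pred)
```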
Abstract: The design of Water Distribution Networks (WDNs) is a tremendously hard optimization problem, and consideration of reliability further adds to the complexity, which may involve huge computational effort. As several past studies have stressed that Evolutionary Algorithms (EAs) can be efficient tools for WDN design, this study presents an effective methodology for WDN design: a Multi-Objective Self-Adaptive Differential Evolution (MOSADE) algorithm that uses Sobol sequences for random number generation, termed S-MOSADE. The efficacy of the S-MOSADE framework is evaluated by applying it to a few benchmark WDNs, considering cost minimization and mechanical reliability maximization, and comparing the results with those of the NSGA-II algorithm. The results illustrate that the S-MOSADE algorithm leads to a better Pareto-optimal front than NSGA-II, with uniformly spaced and widely spread non-dominated solutions, and converges faster than the other algorithms. To further reduce the computational burden, cost minimization and network-resilience maximization are carried out to generate the initial population for the S-MOSADE algorithm. This reduces the computational burden almost threefold compared with random population initialization, saving considerable computational time. The study concludes that the proposed S-MOSADE algorithm, with the strategy of initializing solutions at minimum cost and maximum network resilience, can be used effectively to speed up the multi-objective design of WDNs. (C) 2022 Published by Elsevier B.V.
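A small sketch of Sobol-sequence population initialization for a differential-evolution-style optimizer, using SciPy's quasi-Monte Carlo module; the exact S-MOSADE initialization details are not given in the abstract, so this is only a generic illustration.

```python
import numpy as np
from scipy.stats import qmc

def sobol_population(n_pop, lower, upper, seed=0):
    """Generate an initial population from a Sobol sequence instead of
    pseudo-random numbers, scaled to the decision-variable bounds."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    sampler = qmc.Sobol(d=lower.size, scramble=True, seed=seed)
    sample = sampler.random(n_pop)          # points in the unit hypercube
    return qmc.scale(sample, lower, upper)  # map to [lower, upper]

# e.g. 64 candidate pipe-diameter vectors (in metres) for an 8-link network:
# pop = sobol_population(64, lower=[0.1] * 8, upper=[0.6] * 8)
```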