Journal information
Applied Soft Computing
Elsevier Science, B.V.
ISSN: 1568-4946
Indexed in: EI, ISTP, SCI, AHCI
Officially published
Coverage years

    Bandwidth estimation in high mobility scenarios of MANET using NSGA-II optimized fuzzy inference system

    Pal P., Tripathi S., Kumar C.
    18 pages
    Abstract: Integrating mobility into sensor nodes enhances the scalability of ad-hoc networks. In scenarios where nodes move randomly and frequently change their links, bandwidth estimation may depend on factors other than congestion in network traffic; link failure caused by the high mobility of nodes on the path from source to destination is one of the dominant causes of packet loss. However, in both cases, TCP New-Reno and its variants reduce the congestion-window size to half or to 1 MSS, which can cause bandwidth underutilization when a packet drop is due to link failure. The proposed approach distinguishes link failure from network congestion and estimates bandwidth based on link- and path-stability metrics. The contribution includes the identification of node-mobility fuzziness in unicast and broadcast scenarios of IEEE 802.15.4-based MANETs. The mobility-fuzziness formulation by the proposed NSGA-II optimized fuzzy inference system imitates the node's mobility behavior and, in the event of packet loss, is employed to realize the path-stability metric used to estimate the congestion-window size. A thorough evaluation against current state-of-the-art techniques shows that the proposed path-stability metric allows the congestion-window size to be estimated close to the current congestion window when a packet is lost due to link failure, improving bandwidth utilization in high mobility scenarios from 53–61% for existing approaches to above 83%.
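The congestion-window estimation step can be illustrated with a toy Mamdani-style fuzzy inference sketch. The membership functions, rule consequents, and the `cwnd_scale` name below are illustrative placeholders, not the paper's NSGA-II-tuned system:

```python
# Toy fuzzy inference mapping a path-stability metric in [0, 1] to a
# congestion-window scaling factor. All parameters are assumptions.

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def cwnd_scale(path_stability):
    """Weighted-average defuzzification of three hypothetical rules:
    unstable path -> shrink cwnd heavily, moderate -> halve, stable -> keep."""
    unstable = tri(path_stability, -0.5, 0.0, 0.5)
    moderate = tri(path_stability, 0.0, 0.5, 1.0)
    stable = tri(path_stability, 0.5, 1.0, 1.5)
    weights = [(unstable, 0.1), (moderate, 0.5), (stable, 1.0)]
    num = sum(w * c for w, c in weights)
    den = sum(w for w, _ in weights)
    return num / den if den else 0.5
```

A stable path (metric near 1) keeps most of the current window, while an unstable one shrinks it, which is the behavior the path-stability metric is meant to induce on packet loss.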

    Multi-strategy ensemble firefly algorithm with equilibrium of convergence and diversity

    Zhao J., Chen D., Wang H., Xiao R., et al.
    14 pages
    Abstract: Balancing convergence and diversity in the multi-objective firefly algorithm is essential for obtaining a high-precision, well-distributed Pareto front. However, most existing algorithms cannot guarantee such a balance, leading to poor overall performance. To address this limitation, this paper proposes a multi-strategy ensemble firefly algorithm with equilibrium of convergence and diversity (MEFA-CD). Firstly, an improved linear congruence method is used to generate an initial population with uniform distribution, providing a good start for subsequent population evolution and ensuring global search ability. Secondly, a hybrid learning strategy is utilized to identify the best elite solution according to the maximum fitness value; combined with the current best solution, each firefly is guided to learn under the effect of a compensation factor. On the one hand, this breaks through the population constraints, yielding faster convergence to the Pareto-optimal solution set; on the other hand, it expands the search range of the population, improving the diversity and accuracy of the Pareto-optimal set. Finally, a crowding-distance mechanism is used to delete crowded solutions, which maintains the diversity of the external archive, ensures the local exploitation ability of the population, and further improves convergence. Experimental results show that, compared with other multi-objective optimization algorithms, the proposed algorithm performs better in both convergence and diversity; its optimization performance is improved by 61% over the standard MOFA.
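The crowding-distance mechanism used here to prune crowded solutions is the standard NSGA-II computation, which can be sketched as:

```python
def crowding_distance(front):
    """Crowding distance for a list of objective vectors (NSGA-II style).
    Boundary solutions get infinite distance; interior solutions sum the
    normalized gap between their neighbors along each objective."""
    n = len(front)
    if n == 0:
        return []
    m = len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue  # objective is constant on this front
        for j in range(1, n - 1):
            gap = front[order[j + 1]][k] - front[order[j - 1]][k]
            dist[order[j]] += gap / (hi - lo)
    return dist
```

Solutions with the smallest crowding distance sit in the densest regions of the archive and are the natural candidates for deletion.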

    A multivariate EMD-LSTM model aided with Time Dependent Intrinsic Cross-Correlation for monthly rainfall prediction

    Johny K., Pai M.L., S. A.
    18 pages
    Abstract: Accurate prediction of rainfall is a complex problem because of the large number of controlling factors, the complex interrelationships between them, and the multiscaling behaviour of the process. Because of this multiscaling behaviour, hybrid modelling involving decomposition techniques is preferable to strenuous physical models and standalone data-driven techniques for accurate rainfall prediction. This study proposes a novel hybrid modelling framework integrating Long Short-Term Memory (LSTM) and Multivariate Empirical Mode Decomposition (MEMD) aided with a Time Dependent Intrinsic Cross-Correlation (TDICC) analysis algorithm for monthly rainfall prediction. The application of the proposed model is demonstrated for monthly rainfall prediction over the 2005–2015 period for the All India spatial domain, with the El-Niño Southern Oscillation (ENSO), the Indian Ocean Dipole (IOD) and five antecedent values of rainfall as the input variables. The proposed framework first uses MEMD to obtain a set of orthogonal components, namely Intrinsic Mode Functions (IMFs), identifying and aligning the common scales embedded in the multiple input variables considered. Subsequently, scale-specific rainfall information is predicted using the relevant IMFs and their significant lags identified through TDIC and TDICC analyses. Final aggregation of the predicted rainfall components from the different scales gives the monthly rainfall one generic time step ahead. The efficacy of the proposed MEMD-TDICC-LSTM framework is compared with five other hybrid models, namely MEMD-TDIC-ANN, MEMD-TDICC-ANN, MEMD-ACO-ANN, MEMD-ACO-LSTM and MEMD-TDIC-LSTM, which use Time Dependent Intrinsic Correlation (TDIC) and Ant Colony Optimization (ACO) for predictor selection and an Artificial Neural Network (ANN) as the modelling tool.
    The study used different graphical representations and ten statistical performance-evaluation measures for the validation period; the proposed model achieved a predictive skill of 0.98, a Nash–Sutcliffe Efficiency (NSE) of 0.95 and an Index of Agreement (IA) of 0.91, better than all five remaining models. The capability of the MEMD-TDICC-LSTM model to predict extremes is also confirmed using a bar graph for the rainfall of the 2009 drought year: the model captures the extremes with an annual rainfall of 975.16 mm, close to the observed value of 927.3 mm. The MEMD algorithm facilitates the inclusion of multiple large-scale climatic oscillations as inputs and their multi-time-scale decomposition, TDICC helps to fix the relevant inputs at different time scales, and the LSTM functions as a robust modelling tool. Further, the framework using this specific combination results in a substantial reduction in modelling complexity and faster execution, as the approach considers only the most relevant and significant inputs.
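As a simplified stand-in for TDICC-based lag selection, a plain lagged Pearson cross-correlation can identify which antecedent lag of a predictor is most informative. The `best_lag` helper below is illustrative only, not the paper's time-dependent analysis:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def best_lag(predictor, target, max_lag=5):
    """Return the antecedent lag (1..max_lag) at which the lagged
    predictor series correlates most strongly with the target."""
    scores = {}
    for lag in range(1, max_lag + 1):
        scores[lag] = abs(pearson(predictor[:-lag], target[lag:]))
    return max(scores, key=scores.get)
```

The hybrid framework then feeds only the selected lags of each IMF into the LSTM, which is what keeps the model small and fast.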

    End-to-end table structure recognition and extraction in heterogeneous documents

    Kashinath T., Jain T., Agrawal Y., Anand T., et al.
    12 pages
    Abstract: Automatically detecting and parsing tables into an indexable and searchable format is an important problem in document digitization, relating to computer vision, machine learning, and optical character recognition. This paper presents a simple model based on a deep neural network architecture that combines recent advances in computer vision and machine learning and can be used to detect a table and convert it into a format that can be edited or searched. The motivation for this work is to develop a sound method to extract the vast body of knowledge available in physical documents so that it can be used to build data-driven tools supporting decisions in fields such as healthcare and finance. The model uses a YOLO-based object detector, trained to maximize the Intersection over Union (IoU) of the detected table regions within the document image, and a novel OCR-based algorithm to parse each table detected in the document. Past works have focused on documents and images containing level, well-aligned tables; this paper also presents our findings after running the model on a set of skewed image datasets. Experiments on the Marmot and PubLayNet datasets show that the proposed method is highly accurate and generalizes to different table formats. At an IoU threshold of 50%, we achieve a mean Average Precision (mAP) of 98% and an average IoU of 88.81% on the PubLayNet dataset. With the same IoU threshold, we achieve an mAP of 95.07% and an average IoU of 75.57% on the Marmot dataset.
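The Intersection over Union criterion that the detector is trained to maximize, and that the evaluation thresholds at 50%, is computed as follows for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty if the boxes are disjoint).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A detection counts as a true positive at the 50% threshold when `iou(prediction, ground_truth) >= 0.5`; mAP then averages precision over recall levels under that matching rule.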

    Data-based variable universe adaptive fuzzy controller with self-tuning parameters

    Jin Y., Cao W., Wu M., Yuan Y., et al.
    10 pages
    Abstract: Data-based variable universe adaptive fuzzy controllers (VUAFCs) with self-tuning parameters are developed in this paper for complex systems with unknown universes. The main feature of the proposed VUAFC is a newly defined contraction–expansion (C-E) factor on an infinite universe, which is more flexible and practical in real applications. Moreover, data-based methods are proposed to tune the parameters, including the peak points of the output fuzzy subsets and the C-E factor parameters. The peak points of the output fuzzy subsets are mined by an improved Wang–Mendel method based on conflicting rules, and the parameters of the VUAFC are optimized by solving an offline optimization problem using the chaotic particle swarm optimization (CPSO) algorithm. Simulation results on strip temperature control of the radiant-tube indirect-fired section of an annealing furnace show that the proposed method has strong practicability and good control performance.
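A classical contraction-expansion factor on a bounded universe [-E, E] gives the flavour of variable-universe control; the paper's contribution is a newly defined C-E factor on an infinite universe, which this textbook form (with assumed parameters) only approximates:

```python
def ce_factor(x, E=1.0, tau=0.5, eps=1e-3):
    """Classical C-E factor alpha(x) = (|x|/E)**tau + eps on [-E, E]:
    small error |x| contracts the universe (finer control resolution),
    large error expands it (coarser, faster correction). Parameters
    E, tau, eps are assumed values, not the paper's tuned ones."""
    return (abs(x) / E) ** tau + eps
```

The controller then works on the scaled universe `[-ce_factor(x) * E, ce_factor(x) * E]`, so the effective resolution of the fuzzy partition adapts to the current error.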

    Optimization of an auto drum fashioned brake using the elite opposition-based learning and chaotic k-best gravitational search strategy based grey wolf optimizer algorithm

    Yuan Y., Zhao Y., Mu X., Shao X., et al.
    11 pages
    Abstract: Highly non-linear optimization problems are widely found in real-world engineering applications. To tackle them, a novel assisted optimization strategy, named the elite opposition-based learning and chaotic k-best gravitational search strategy (EOCS), is proposed for the grey wolf optimizer (GWO) algorithm. In the EOCS-based grey wolf optimizer (EOCSGWO) algorithm, the elite opposition-based learning strategy (EOBLS) is proposed to take full advantage of better-performing particles in subsequent generations, and a chaotic k-best gravitational search strategy (CKGSS) is proposed to obtain an adaptive step that improves global exploratory ability. The performance of the EOCSGWO is verified and compared with those of seven other meta-heuristic optimization algorithms on ten popular benchmark functions. Results show that the EOCSGWO is more competitive in accuracy and robustness, ranking first among the compared algorithms. Further, the EOCSGWO is employed to optimize the design of an auto drum fashioned brake; the results show that the braking efficiency factor can be improved by 28.412% compared with the initial design.
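The elite opposition-based learning idea can be sketched in a few lines: for each solution, evaluate its opposite point within the population's current bounds and keep the better of the two. The bounds and fitness below are illustrative; the paper couples this step with the GWO position update:

```python
def opposite(x, lo, hi):
    """Opposition point of x within per-dimension bounds [lo, hi]."""
    return [l + h - xi for xi, l, h in zip(x, lo, hi)]

def eobl_step(population, fitness):
    """One opposition-based learning pass (minimization): compute the
    dynamic per-dimension bounds of the population, then keep the
    better of each solution and its opposite."""
    dims = len(population[0])
    lo = [min(p[d] for p in population) for d in range(dims)]
    hi = [max(p[d] for p in population) for d in range(dims)]
    out = []
    for p in population:
        q = opposite(p, lo, hi)
        out.append(p if fitness(p) <= fitness(q) else q)
    return out
```

Using the population's own bounds (rather than the fixed search-space bounds) is what makes the opposition "elite": the opposites land in the region the better-performing particles currently occupy.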

    Enhanced COVID-19 X-ray image preprocessing schema using type-2 neutrosophic set

    Abdel-Basset M., Mostafa N.N., Sallam K.M., Elgendi I., et al.
    13 pages
    Abstract: In this study, we introduce a new medical image enhancement approach based on a type-2 neutrosophic set (T2NS) and α-mean and β-enhancement operations. The approach obtains good enhancement results by defining the uncertainties within the image using a six-degree membership. To demonstrate a real case study of the proposed technique, a novel enhancement approach for COVID-19 X-ray images is introduced. X-ray images suffer from poor contrast and inconsistencies in their gray levels; the proposed method tackles this issue by obtaining a neutrosophic domain for gray-level images based on six membership functions. Through the enhancement operations, T2NS entropy is computed to evaluate the change in the gray levels of X-ray images; the proposed approach improves chest X-ray images by reducing the entropy values, minimizing the uncertainty within the image. An image de-neutrosophication operation is applied to the enhanced images to convert them from the neutrosophic set (NS) domain back to grayscale. Finally, the output images are compared with enhanced images obtained under a single-valued neutrosophic set (SVNS) domain.

    GENEmops: Supervised feature selection from high dimensional biomedical dataset

    Agarwalla P., Mukhopadhyay S.
    20 pages
    Abstract: Identification of the differentially expressed genes underlying carcinogenic expression remains crucial for accurate detection and treatment of the disease. The challenge of a large number of attributes relative to a small number of instances, together with the need to predict highly discriminative genes, requires an effective method. The task can be regarded as a multi-objective problem involving minimization of the number of selected genes and maximization of classification performance; the aim is to find the optimal count of the most significant genes strongly associated with cancer classification. In this paper, we propose a framework entitled GENEmops for gene selection and subsequent classification of the disease. The core of GENEmops is inspired by the multi-objective player selection strategy based hybrid population search (MOPS-HPS). The proposed system uses a multi-filtering and adaptive parameter-tuning approach for gene selection, and a new graded rotational blending operator is introduced to enhance the exploitation capability of the hybrid wrapper-based scheme. Unlike most existing methods, which employ some fixed strategy to transmute the continuous search space into a binary one, it uses an adaptive binary conversion that is stochastically updated during the search phase. GENEmops also improves the performance of the classifier by tuning its parameters. The efficiency of the proposed GENEmops is tested on sixteen biological datasets (eight binary-class and eight multi-class) and compared with state-of-the-art computationally intelligent multi-objective approaches. Experimental results demonstrate the efficiency of the proposed work.
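For contrast with the adaptive binary conversion described above, the standard S-shaped transfer-function binarization that many existing wrappers employ looks like this (an illustrative sketch, not the GENEmops mapping):

```python
import math
import random

def sigmoid_binarize(position, seed=0):
    """Standard sigmoid transfer-function binarization: each continuous
    coordinate becomes 1 (gene selected) with probability sigmoid(x).
    Seeded RNG for reproducibility; GENEmops instead updates this
    mapping adaptively and stochastically during the search."""
    rng = random.Random(seed)
    return [1 if rng.random() < 1 / (1 + math.exp(-x)) else 0
            for x in position]
```

The fixed sigmoid shape is what the adaptive scheme relaxes: with a static transfer function the selection probability depends only on the coordinate value, regardless of search progress.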

    A correlation guided genetic algorithm and its application to feature selection

    Zhou J., Hua Z.
    12 pages
    Abstract: Traditional feature selection methods based on genetic algorithms evolve randomly, using unguided crossover and mutation operators. This leads to many inferior solutions being generated and then verified using costly fitness functions. In this paper, we propose a new feature selection method based on a correlation-guided genetic algorithm. It first roughly checks the quality of potential solutions to reduce the chance of producing inferior ones, so that more potentially superior solutions can be verified by the classifier, improving the efficiency of the evolutionary process. It is theoretically proven that the proposed method converges to the optimal solution under a very weak precondition. Numerical results on 4 artificial datasets and 6 real datasets show that, compared with other existing methods, the proposed method is a competitive feature selection method with higher classification accuracy and a more efficient evolutionary process.
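The rough-check idea can be sketched as a cheap correlation-based filter applied before the costly classifier evaluation. The `quick_score` heuristic and threshold below are illustrative assumptions, not the paper's guidance rule:

```python
def quick_score(subset, corr_with_label):
    """Cheap proxy for subset quality: mean absolute feature-label
    correlation of the selected features (bit i == 1 means selected)."""
    chosen = [corr_with_label[i] for i, bit in enumerate(subset) if bit]
    return sum(abs(c) for c in chosen) / len(chosen) if chosen else 0.0

def screen(offspring, corr_with_label, threshold=0.3):
    """Discard likely-inferior subsets so that only promising candidates
    reach the costly classifier-based fitness function."""
    return [s for s in offspring
            if quick_score(s, corr_with_label) >= threshold]
```

The saving comes from the asymmetry in cost: the proxy is O(features) per candidate, while each classifier-based fitness evaluation requires a full train/validate cycle.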

    Explainable artificial intelligence-based edge fuzzy images for COVID-19 detection and identification

    Magaia N., de Albuquerque V.H.C., Hu Q., Gois F.N.B., et al.
    19 pages
    Abstract: The COVID-19 pandemic continues to wreak havoc on the health and well-being of the world's population. Successful screening of infected patients is a critical step in the fight against it, with radiology examination using chest radiography being one of the most important screening methods, although for the definitive diagnosis of COVID-19 the reverse-transcriptase polymerase chain reaction remains the gold standard. Since currently available lab tests may not detect all infected individuals, new screening methods are required. Motivated by this and by the open-source efforts in this research area, we propose a Multi-Input Transfer Learning COVID-Net fuzzy convolutional neural network to detect COVID-19 instances from torso X-ray images. Furthermore, we use an explainability method to investigate several COVID-Net forecasts, both to gain deeper insight into the critical factors associated with COVID-19 cases and to aid clinicians in improving screening. We show that using transfer learning and pre-trained models, COVID-19 can be detected with a high degree of accuracy. Using X-ray images, we chose four neural networks to predict its probability. Finally, to achieve better results, we considered various methods to verify the techniques proposed here. As a result, we were able to create a model with an AUC of 1.0 and accuracy, precision, and recall of 0.97. The model was quantized for use on Internet of Things devices and maintained an accuracy of 0.95.
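The reported AUC can be computed directly from predicted scores via the rank-sum (Mann-Whitney) formulation, sketched here in plain Python:

```python
def auc(labels, scores):
    """Area under the ROC curve: the probability that a randomly chosen
    positive example is scored above a randomly chosen negative one
    (ties count as half a win)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0, as reported above, means every positive case was ranked above every negative case by the model's score.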