Abstract: This paper proposes an integrated framework for a deep neural network to estimate the remaining useful life (RUL) to ensure the reliability and safety of complex mechanical systems and enable proactive maintenance for intelligent operation. This data-driven method can capture complex and highly nonlinear degradation characteristics that are difficult to predict using physics-based prognostics and health management. In particular, this study focuses on feature preprocessing and hyperparameter optimization, whereas previous studies focused on the neural network architecture to improve prediction accuracy and robustness. The proposed integrated framework comprises four phases: feature preprocessing, feature reasoning using a deep neural network, hyperparameter optimization using a genetic algorithm, and RUL estimation. In the first phase, sensor measurements sensitive to degradation are selected and separated into primary and dynamic degradation trends. In addition, step differential values are extracted to account for multiple operational modes using an unsupervised clustering method. In the second phase, feature reasoning is performed using a deep neural network to characterize hidden, complex, and highly nonlinear degradation features. The health indicators constructed in the first phase are used to train the proposed deep neural network. In the third phase, a genetic algorithm is introduced to optimize the hyperparameters used in feature preprocessing and reasoning. The final phase estimates the RUL using the proposed deep neural network with optimized hyperparameters. The proposed method was validated on the C-MAPSS dataset.
The results show that the proposed integrated framework outperformed other state-of-the-art machine learning and deep learning methods under different operational conditions, suggesting that efficient feature preprocessing and hyperparameter optimization significantly improve the prediction accuracy and robustness of RUL estimation for data-driven prognostics and health management. (C) 2022 Elsevier B.V. All rights reserved.
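The genetic-algorithm phase described above can be sketched in a few lines. This is a minimal illustration only: the hyperparameter names (`window`, `hidden`), their ranges, the synthetic fitness function, and the GA settings are assumptions for demonstration, not taken from the paper, where the objective would be a validation RUL error such as RMSE on held-out engines.

```python
import random

random.seed(0)

# Hypothetical search space -- names and ranges are illustrative.
SPACE = {"window": (3, 31), "hidden": (8, 256)}

def fitness(ind):
    # Stand-in objective: a synthetic bowl-shaped score. In practice this
    # would be the negative validation RUL error of the trained network.
    return -((ind["window"] - 15) ** 2 + (ind["hidden"] - 64) ** 2)

def random_ind():
    return {k: random.randint(lo, hi) for k, (lo, hi) in SPACE.items()}

def crossover(a, b):
    # Uniform crossover: each gene inherited from either parent.
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(ind, rate=0.2):
    # Resample each gene within its bounds with probability `rate`.
    for k, (lo, hi) in SPACE.items():
        if random.random() < rate:
            ind[k] = random.randint(lo, hi)
    return ind

def ga(pop_size=20, generations=30):
    pop = [random_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
```

Because each fitness evaluation would train a network, real runs keep the population small and often cache evaluated configurations.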
Abstract: Since early 2020, novel coronavirus pneumonia has spread globally at an extremely fast pace, becoming a major public health event due to the magnitude of its harm. In the face of a dramatic increase in the number of patients with COVID-19, the need for quick diagnosis of suspected cases has become particularly critical. Therefore, this paper constructs a fuzzy classifier that aims to detect infected subjects by observing and analyzing the CT images of suspected patients. First, a deep learning algorithm is used to extract the low-level features of CT images in the COVID-CT dataset. Subsequently, we analyze the extracted feature information with an attribute reduction algorithm to obtain features with high discriminability. Then, key features are selected as input to train the fuzzy diagnosis model. Finally, several images in the dataset are used as a test set to evaluate the trained fuzzy classifier. The obtained accuracy rate is 94.2%, and the F1-score is 93.8%. Experimental results show that, compared with the deep learning diagnosis methods widely used in medical image analysis, the proposed fuzzy model improves the accuracy and efficiency of diagnosis, which consequently helps to curb the spread of COVID-19. (C) 2022 Elsevier B.V. All rights reserved.
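A fuzzy classifier over a few reduced features can be sketched with triangular membership functions and min-aggregation over rules. Everything here is hypothetical: the two features, the class labels, and the membership cut-points are invented for illustration; the paper's actual rule base and membership shapes are not specified in the abstract.

```python
def tri_membership(x, a, b, c):
    # Triangular membership function rising on [a, b], falling on [b, c].
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical rule base: one (a, b, c) triangle per feature per class.
RULES = {
    "infected": [(0.6, 0.8, 1.0), (0.5, 0.7, 0.9)],
    "normal":   [(0.0, 0.2, 0.4), (0.1, 0.3, 0.5)],
}

def classify(features):
    # Firing strength of each class = min membership across its features;
    # the class with the strongest firing wins.
    scores = {label: min(tri_membership(x, *mf)
                         for x, mf in zip(features, mfs))
              for label, mfs in RULES.items()}
    return max(scores, key=scores.get)

label = classify([0.8, 0.7])  # both features near the "infected" peaks
```

The min-then-max pattern is the classic Mamdani-style inference step; a trained system would fit the triangle parameters to the reduced CT features.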
Abstract: Diatom detection results are an important indicator in forensic drowning examinations, and most current deep learning methods have achieved considerable success in detecting diatoms against simple or absent backgrounds. However, diatom images captured by high-definition scanning electron microscopy in modern forensic science contain complex backgrounds that hamper accurate diatom detection, resulting in missed detections of small and marginal diatoms in multi-diatom scenarios. In this paper, we propose a Hybrid-Dilated-Convolution-incorporated Single Shot Multibox Detector (HDC-SSD) to address this problem. By exploiting the enlarged receptive field of HDC, the proposed algorithm not only improves the detection rate but also enhances the detection of small and marginal objects. The proposed method was validated using our self-established dataset. Compared with SSD, HDC-SSD reduces the undetected rate by approximately 48.6% while running almost as fast as SSD. More importantly, compared with several current state-of-the-art methods, HDC-SSD obtains the highest recall value at 0.9302. (C) 2022 Elsevier B.V. All rights reserved.
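The benefit of hybrid dilated convolution over a fixed dilation rate can be checked with simple receptive-field arithmetic. The sawtooth rate schedule [1, 2, 5] below is a commonly cited HDC example, assumed here for illustration; the abstract does not state the exact rates used in HDC-SSD.

```python
def receptive_field(dilations, kernel=3):
    # Each k-tap conv with dilation d grows the receptive field by (k-1)*d.
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf

def coverage(dilations, kernel=3):
    # Set of input offsets actually touched by the stacked dilated kernels.
    taps = {0}
    for d in dilations:
        taps = {t + k * d for t in taps
                for k in range(-(kernel // 2), kernel // 2 + 1)}
    return taps

hdc = coverage([1, 2, 5])    # sawtooth rates: dense coverage, no holes
fixed = coverage([2, 2, 2])  # fixed rate: "gridding" skips odd offsets
```

With rates [1, 2, 5] the receptive field reaches 17 input positions and every one of them is sampled, whereas three layers at a fixed rate of 2 touch only even offsets, which is why a fixed-rate stack misses small objects that fall between the grid points.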
Abstract: Using multimodal fusion methods for emotion recognition has become a trend. A fusion vector can more comprehensively reflect a subject's emotional state, yielding more accurate emotion recognition. However, different fusion inputs and feature fusion methods have different effects on the final fusion results. In this paper, we propose a subjective and objective feature fused neural network model (SOFNN) for emotion recognition, which can effectively learn spatial-temporal information from EEG signals and dynamically integrate EEG signals with eye movement signals. Specifically, we extract richer spatial and temporal information from the original EEG signal through a series of 1-D convolution kernels of different sizes, and we verify the effectiveness of the extracted features through experiments. The sizes of the 1-D convolution kernels are determined by the characteristics (such as sampling rate and number of channels) of the original EEG signal. Then, we design a subjective and objective feature fusion framework that adjusts the proportion of the two features through dynamic learning of a weight vector, so as to fully exploit their respective advantages. We evaluate the performance of our model on the widely used SEED-IV dataset. For the recognition of four emotions (happy, sad, fear, and neutral), our model achieves an accuracy of 86.27% with a standard deviation of 10.16%, outperforming existing methods. In addition, we design a variety of ablation experiments to verify the effectiveness of each module in our model. The experimental results show that our model makes better use of the complementary relationship between subjective and objective features, achieving better emotion recognition performance. (C) 2022 Elsevier B.V. All rights reserved.
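The weight-vector fusion idea, where a learned pair of weights balances EEG against eye-movement features, can be sketched as a softmax-gated convex combination. The two-element feature vectors and fixed logits below stand in for trained parameters; the real model's fusion operates on high-dimensional learned features, so this is a shape-level assumption, not the paper's implementation.

```python
import math

def softmax(ws):
    # Numerically stable softmax over a list of logits.
    m = max(ws)
    exps = [math.exp(w - m) for w in ws]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(eeg_feat, eye_feat, logits):
    # Dynamic weighting: a 2-d weight vector (learned during training,
    # fixed here for illustration) sets each modality's proportion.
    a, b = softmax(logits)
    return [a * x + b * y for x, y in zip(eeg_feat, eye_feat)]

# Equal logits give a 50/50 blend of the two modalities.
fused = fuse([1.0, 0.0], [0.0, 1.0], [0.0, 0.0])
```

Because the weights pass through a softmax, they stay positive and sum to one, so the fused vector always remains a convex combination of the two modality features.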
Abstract: Based on picture fuzzy set theory, picture fuzzy clustering has achieved good results on some data because more information is involved in the clustering process. However, current picture fuzzy clustering methods still suffer from two common weaknesses, i.e., sensitivity to outliers and neglect of the uncertainty caused by different fuzzy degrees, which impair their performance in practical applications such as medical image segmentation. To address these issues, we present two new picture fuzzy clustering methods in this paper. First, to improve immunity to outliers, we propose an outlier-robust picture fuzzy clustering method named ORPFC that uses a robust distance measurement, which treats data objects far away from cluster prototypes as outliers and limits their effect on the prototype update. Second, to handle the uncertainty caused by fuzzy degrees, we further present an interval type-2 enhanced method called IT2ORPFC, which incorporates interval type-2 fuzzy set theory into ORPFC. In each iteration, IT2ORPFC estimates positive, neutral, and refusal memberships according to different fuzzification coefficients and then conducts type reduction to obtain reliable type-1 clustering results. In the experiments, the proposed methods obtain robust and reliable results on eleven datasets. In particular, ORPFC and IT2ORPFC achieve strong performance in segmenting noisy medical images. (C) 2022 Elsevier B.V. All rights reserved.
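The effect of limiting outlier influence on a prototype update can be shown with a capped-weight scheme in one dimension. The `delta / dist` down-weighting used here is a simple Huber-style choice assumed for illustration; ORPFC's actual robust distance measurement may differ, and real picture fuzzy clustering also carries positive, neutral, and refusal memberships omitted here.

```python
def robust_weight(dist, delta=2.0):
    # Points within `delta` of the prototype get full weight; farther
    # points are down-weighted in proportion to their distance, so a
    # single outlier can barely drag the prototype.
    return 1.0 if dist <= delta else delta / dist

def update_prototype(points, proto, delta=2.0):
    # Weighted-mean prototype update with the robust weights above.
    ws = [robust_weight(abs(p - proto), delta) for p in points]
    return sum(w * p for w, p in zip(ws, points)) / sum(ws)

# A tight inlier cluster around 0 plus one far outlier at 100.
pts = [-0.5, 0.0, 0.5, 100.0]
new_proto = update_prototype(pts, proto=0.0)
```

A plain mean of these points is 25.0, far from the inlier cluster; the robust update stays well under 1.0 because the outlier's weight collapses to 2/100.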
Abstract: Bearings are among the most critical components in rotating machinery. Since bearing failures cause unexpected machine damage, it is important to recognize defects in bearings in a timely and accurate manner. However, due to the nonlinear and nonstationary nature of vibration signals, feature extraction and fault diagnosis based on vibration signals remain challenging problems. As a representative deep neural network (DNN), the convolutional neural network (CNN) has been widely used for feature learning from vibration signals in machinery fault diagnosis. Owing to the hierarchical structure of a CNN, multi-level features are generated by the layer-by-layer convolutional calculation in the deep network. Thus, it is appealing to combine the layer-by-layer features in a concatenation layer for multi-level feature fusion. In this paper, a novel CNN, the multi-level features fusion network (MLFNet), is proposed for feature learning from vibration signals. First, a multi-scale convolution is developed in MLFNet, where multiple branches with different kernel sizes are utilized to extract fault-related features. Second, the features at different layers are coupled by a concatenation layer to preserve discriminative information. Third, an adaptive weighted selection based on dynamic feature selection is proposed for multi-level feature fusion. The effectiveness of MLFNet for machinery fault diagnosis is verified on two bearing test beds. The experimental results demonstrate that MLFNet performs well at feature extraction from vibration signals, obtaining a recognition accuracy of 99.75% for case 1 (single condition) and case 2 (varying condition). It outperforms typical DNNs and state-of-the-art methods on bearing fault diagnosis. (C) 2022 Elsevier B.V. All rights reserved.
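The multi-scale branch-and-concatenate idea can be sketched with plain 1-D convolutions: parallel branches with different kernel sizes run over the same signal and their outputs are concatenated. The toy signal and hand-picked difference/central-difference kernels below are assumptions for illustration; MLFNet's branches use many learned kernels, so this shows only the data flow.

```python
def conv1d(signal, kernel):
    # Valid (no padding) 1-D convolution; output length is
    # len(signal) - len(kernel) + 1.
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def multi_scale(signal, kernels):
    # Run each branch (one kernel size per branch) over the same input
    # and concatenate the resulting feature maps.
    feats = []
    for ker in kernels:
        feats.extend(conv1d(signal, ker))
    return feats

# Toy vibration-like signal; a 2-tap difference kernel and a 3-tap
# central-difference kernel emulate two receptive-field scales.
sig = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0]
out = multi_scale(sig, [[1.0, -1.0], [0.5, 0.0, -0.5]])
```

Because each branch shrinks the signal by a different amount, a real network pads the branches to equal length before concatenation; the sketch simply appends them to show what the concatenation layer receives.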
Abstract: This paper studies two major research problems in 3D human pose estimation from depth data. First, we seek an effective way to apply an RGB pre-trained 2D CNN model to the 3D pose field, so as to transfer large-scale RGB annotation information to the depth domain. In particular, we propose a cross-modality CNN training strategy, whose key idea is to set a partial Batch Normalization (BN) layer within the RGB pre-trained 2D CNN model to weaken the distribution divergence between RGB and depth data during training. To involve richer 3D descriptive cues, the raw depth data is appended with a normal vector map. Second, although coarse-to-fine human pose estimation with local refinement helps enhance performance, the way to set the optimal local observation scale is not well addressed. To tackle this crucial problem, we propose to jointly fuse multi-scale local information. A multi-scale local refinement network is proposed accordingly, where small local regions focus on capturing fine detail while large local regions contain richer semantic contextual information. Experiments on two 3D human pose estimation datasets with depth data verify the effectiveness and real-time running capacity of our proposed method. (C) 2022 Elsevier B.V. All rights reserved.
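The intuition behind re-estimating BN statistics on a new modality can be shown with scalar activations: depth inputs produce a shifted activation distribution relative to the RGB data the backbone was trained on, and normalizing with statistics computed on the depth batch pulls them back into the range the downstream weights expect. The toy activation values are invented, and which BN layers are re-estimated (the "partial" part) is specific to the paper, so treat this strictly as the normalization arithmetic.

```python
import math

def batch_norm(xs, mean, var, eps=1e-5):
    # Standard BN normalization step (no learned scale/shift here).
    return [(x - mean) / math.sqrt(var + eps) for x in xs]

def stats(xs):
    # Batch mean and (population) variance.
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, v

# Toy single-channel activations: the depth batch is shifted well away
# from the RGB distribution the pre-trained weights were fitted to.
rgb_acts = [0.1, -0.2, 0.3, 0.0]
depth_acts = [2.1, 1.8, 2.3, 2.0]

# Re-estimating mean/var on the depth batch recenters and rescales it,
# weakening the RGB/depth distribution divergence at this layer.
m, v = stats(depth_acts)
aligned = batch_norm(depth_acts, m, v)
```

After normalization the depth activations have near-zero mean and near-unit variance, i.e. roughly the regime the RGB-trained layers downstream were exposed to during pre-training.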