Anurag Kumar Jha, Aparna Raj, Ashish Kumar Jha, Sujala D. Shetty...
pp. 471-485
Abstract Supply chain management and Hyperledger are two interconnected domains that leverage blockchain technology to enhance efficiency, transparency, and security in supply chain operations. Together they provide a decentralized, traceable, real-time platform for recording and managing transactions, which is particularly valuable for industries handling sensitive goods. This paper explores the integration of supply chain management with Hyperledger blockchain technology. We propose a decentralized Hyperledger Fabric blockchain network that improves traceability, security, and efficiency by monitoring environmental conditions. This approach is particularly beneficial for transporting sensitive goods, such as medical supplies and perishable items, by ensuring optimal conditions and real-time data accessibility. The integration of Artificial Intelligence (AI) further enhances insights, reduces waste, and improves overall efficiency. By utilizing a distributed network free from third-party intermediaries, the system ensures immutability and remote accessibility, addressing the challenges of transporting heat- and humidity-sensitive products. Our experimental assessment demonstrates the benefits of private blockchain technologies, including enhanced security, regulatory compliance, compatibility, flexibility, and scalability. This study presents a detailed methodology for developing a traceable, efficient, and sustainable agricultural supply chain.
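The abstract above does not include the authors' chaincode, but the core property it relies on, an append-only, tamper-evident record of shipment conditions, can be illustrated with a minimal stdlib sketch. This is not Hyperledger Fabric code; the `SensorLedger` class and its field names are hypothetical stand-ins for a hash-chained ledger of environmental readings.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Deterministically hash a reading together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class SensorLedger:
    """Append-only chain of shipment condition readings (hypothetical sketch)."""
    def __init__(self):
        self.chain = []                # list of (record, hash) pairs
        self.prev_hash = "0" * 64      # genesis value

    def append(self, shipment_id: str, temp_c: float, humidity_pct: float):
        record = {"shipment": shipment_id, "temp_c": temp_c, "humidity": humidity_pct}
        h = block_hash(record, self.prev_hash)
        self.chain.append((record, h))
        self.prev_hash = h

    def verify(self) -> bool:
        """Recompute every hash; any retroactive edit breaks the chain."""
        prev = "0" * 64
        for record, h in self.chain:
            if block_hash(record, prev) != h:
                return False
            prev = h
        return True

ledger = SensorLedger()
ledger.append("VAX-001", 4.2, 55.0)
ledger.append("VAX-001", 4.5, 54.1)
assert ledger.verify()
ledger.chain[0][0]["temp_c"] = 8.0   # tamper with a past reading
assert not ledger.verify()           # tampering is detected
```

In a real Fabric deployment this role is played by the ordered, endorsed transaction log and world state rather than an in-process list, but the tamper-evidence argument is the same.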
Mehdi Khashei, Fatemeh Chahkoutahi, Ali Zeinal Hamadani
pp. 487-503
Abstract Deep learning is a highly popular and effective classification technique capable of handling complex patterns. In recent years, many researchers have focused on enhancing the performance of both shallow and deep intelligent classifiers. Among the methodologies developed for this purpose, reliable and jumping modeling techniques have shown great promise in improving the accuracy of diverse classifiers with different characteristics. In this paper, a reliable jumping-based deep learning (RJDL) approach is proposed that simultaneously leverages the advantages of both methodologies to enhance the classification performance of deep learning classifiers. In the first stage of the RJDL classifier, the jumping-based methodology is applied to the cost/loss function of a conventional deep learning classifier. This transformation converts the continuous feasible set into a discrete one, enabling the model to jump between different candidate points. In the second stage, the reliable-based methodology is applied to the jumping-based cost/loss function obtained from the first stage, estimating the discrete connection weights so that the frequency of jumping is minimized. To evaluate the proposed RJDL methodology, seven benchmark data sets related to transportation are considered, and deep feed-forward neural networks (DNNs) are selected for the implementation. Empirical results of the reliable jumping-based deep feed-forward neural network (RJDFNN) demonstrate that the proposed classifier consistently yields more accurate outcomes than the conventional deep feed-forward neural network. On average, the RJDFNN classifier achieves a classification rate of 89.87%, which is 6.65% higher than the classic deep feed-forward neural network.
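The abstract does not specify how the discrete feasible set or the jump frequency is defined, but the two central notions, mapping continuous weights onto a discrete set and counting how many weights "jump" between updates, can be sketched as follows. The grid and both helper functions are hypothetical illustrations, not the RJDL formulation.

```python
def discretize(weights, grid):
    """Map each continuous weight to its nearest point on a discrete grid."""
    return [min(grid, key=lambda g: abs(g - w)) for w in weights]

def jump_count(old, new):
    """Number of weights that moved to a different grid point between updates."""
    return sum(1 for a, b in zip(old, new) if a != b)

grid = [-1.0, -0.5, 0.0, 0.5, 1.0]                 # discrete feasible set
w_before = discretize([0.12, -0.61, 0.90], grid)   # -> [0.0, -0.5, 1.0]
w_after  = discretize([0.31, -0.58, 0.88], grid)   # -> [0.5, -0.5, 1.0]
assert jump_count(w_before, w_after) == 1          # only the first weight jumped
```

A reliability-oriented estimator in this spirit would prefer weight updates that keep `jump_count` small while still decreasing the loss.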
Abstract The building industry, a major contributor to greenhouse gas emissions, is under increasing pressure due to growing concern over the impact of climate change on communities. Geopolymer concrete (GPC) has emerged as a feasible alternative construction material owing to the environmental concerns linked to cement manufacture. The findings of this study contribute to the advancement of machine learning methods for determining the properties of environmentally friendly concrete, which has the potential to replace traditional concrete and reduce carbon dioxide emissions in the building industry. In the current study, the compressive strength (fc) of GPC is estimated using integrated analysis when ground granulated blast-furnace slag (GGBS) is substituted with natural zeolite (NZ), silica fume (SF), and varying NaOH concentrations. A comprehensive compilation of experimental tests on GPC specimens was gathered from various sources, yielding a total of 254 data sets. To this end, support vector regression (SVR) was integrated with the Arithmetic Optimization Algorithm (AOA) (ASVR), the Bald Eagle Search Optimization algorithm (BESO) (BSVR), and Henry Gas Solubility Optimization (HGSO) (HSVR). In addition, the Multivariate Adaptive Regression Splines (MARS) method was developed to derive an explicit equation relating the inputs to the output. The integration of these methods led to significant enhancements in predictive accuracy, surpassing existing models. Notably, the BSVR approach demonstrated remarkable improvements in precision and consistency, outperforming the other frameworks across statistical metrics, error distributions, and Taylor diagram analysis. This study marks a substantial advancement in machine-learning-driven optimization for sustainable concrete, with BSVR proving to be the most reliable and effective model for predicting the compressive strength of GPC.
These improvements offer valuable insights for further reducing environmental impacts in the construction industry.
Abstract Detecting alcohol consumption is a difficult task, as conventional odor-based devices are sometimes unreliable. Electroencephalography (EEG) is a technique normally applied to measure the electrical activity of the brain; however, it has also proved useful for evaluating subjects with alcohol addiction. This paper presents a new automatic alcoholism detection system using a deep learning technique with a convolutional neural network (CNN) architecture of four convolutional layers operating on EEG connectivity. The EEGs were first preprocessed to reduce artifacts and noise. Then, functional connectivity in the time-frequency domain was calculated from the pre-processed EEGs of both non-alcoholic and alcoholic subjects. EEG connectivity was measured in the frequency range from 1 to 45 Hz, in steps of 1 Hz. The resulting 64 × 64 connectivity matrices were used as input to the four-layer CNN. The experimental outcomes demonstrate that, for non-alcoholic/alcoholic EEG classification, the proposed method achieved a mean classification accuracy of 99.51%, sensitivity of 99.68%, and specificity of 99.36% on the UCI-ML EEG database, an excellent performance compared with current EEG-based methods for diagnosing alcoholism. These results indicate that the proposed CNN architecture with four convolutional layers is suitable and efficient for clinical use in diagnosing alcohol addiction. This approach should be further validated on other alcohol-dependence EEG databases.
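The abstract does not state which connectivity metric was used, so as one plausible illustration, the sketch below builds a channels × channels functional-connectivity matrix from pairwise Pearson correlations of synthetic signals (four channels stand in for the 64 EEG channels that would yield the 64 × 64 CNN input). Both the metric choice and the synthetic data are assumptions.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def connectivity_matrix(signals):
    """channels x channels matrix of pairwise correlations."""
    n = len(signals)
    return [[pearson(signals[i], signals[j]) for j in range(n)] for i in range(n)]

random.seed(0)
base = [random.gauss(0, 1) for _ in range(256)]                 # shared source
signals = [[s + random.gauss(0, 0.5) for s in base] for _ in range(4)]
C = connectivity_matrix(signals)
assert all(abs(C[i][i] - 1.0) < 1e-9 for i in range(4))         # unit diagonal
assert C[0][1] == C[1][0]                                       # symmetric
```

In the paper's pipeline such a matrix would be computed per 1 Hz frequency step in 1-45 Hz before being fed to the CNN.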
Abstract Brain tumors are among the deadliest diseases worldwide. Tumors may harm healthy brain tissue or increase intracranial pressure, and the rapid growth of tumor cells can be fatal. In the realm of medical image analysis, brain tumor identification is a crucial challenge, since timely and precise diagnosis is essential for treatment planning and patient care. Automating the diagnosis of brain tumors from medical images, such as MRI scans, has shown considerable potential thanks to deep learning algorithms. Image segmentation is required for the diagnosis of brain malignancies, and tumor detection involves complex stages that require identifying two distinct regions in brain tumor images. In this work, we propose a method for brain tumor detection that combines image preprocessing techniques with the ResNet50 deep learning model. MRI images are first preprocessed by normalizing pixel intensities, resizing for uniformity, sharpening edges to highlight tumor boundaries, and enhancing abnormal areas through intensity deviation. The images are then divided into smaller slices for more focused analysis. These processed images are used to train and test the ResNet50 model, leading to improved accuracy in identifying brain tumors. We achieved an accuracy of 99.25%, sensitivity of 99.36%, specificity of 98.78%, precision of 99.12%, F1 score of 99.05%, and recall of 99.79%. Compared to existing approaches, the method identifies tumors more precisely and with less processing time.
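Two of the preprocessing steps named in the abstract, intensity normalization and dividing the image into smaller slices, can be sketched in pure Python; the 256 × 256 synthetic "scan" and the 64 × 64 patch size are assumptions for illustration, not the paper's parameters.

```python
def normalize(img):
    """Scale pixel intensities of a 2-D list to [0, 1]."""
    flat = [v for row in img for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1
    return [[(v - lo) / span for v in row] for row in img]

def slice_patches(img, patch):
    """Divide an image into non-overlapping patch x patch slices."""
    h, w = len(img), len(img[0])
    return [[row[c:c + patch] for row in img[r:r + patch]]
            for r in range(0, h - patch + 1, patch)
            for c in range(0, w - patch + 1, patch)]

# Synthetic 256 x 256 "scan" with intensities in 0..255
scan = [[(r * 7 + c * 13) % 256 for c in range(256)] for r in range(256)]
patches = slice_patches(normalize(scan), 64)
assert len(patches) == 16                             # 4 x 4 grid of slices
assert len(patches[0]) == 64 and len(patches[0][0]) == 64
```

Each slice would then be resized/stacked as needed and passed to ResNet50 for training and inference.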
Abstract Given current global conditions, particularly energy and economic security, environmental problems, and the greenhouse gas emissions driving climate change, decision-makers are forced to concentrate on sustainable development, especially in the energy efficiency sector. Buildings are a critical source of energy consumption and greenhouse gas production, so estimating their energy usage is crucial to decreasing their impact. This article applies a Random Forest (RF) ensemble classifier, a widely used machine learning algorithm that aggregates the outputs of multiple decision trees into a single prediction, to estimate building heating load. Artificial Rabbits Optimization (ARO) and Electric Charged Particles Optimization (ECPO) are used to boost accuracy and reduce total loss when estimating heating load. The study provides insight into building heating load prediction and proposes the RFEC model (Random Forest optimized with Electric Charged Particles Optimization) as the most efficient approach, achieving a maximum coefficient of determination of 0.994 and a root mean square error of 0.776. Compared with the simple RF model, whose R2 and RMSE are 0.974 and 1.481, respectively, optimizing RF with ECPO increases R2 by approximately 2% and decreases the error by roughly 48%.
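The abstract's two evaluation metrics, the coefficient of determination (R2) and root mean square error (RMSE), have standard definitions that can be stated as a short sketch; the heating-load values below are hypothetical toy data, not from the paper's dataset.

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [15.0, 20.0, 25.0, 30.0]   # hypothetical heating loads
y_pred = [14.5, 20.5, 24.0, 31.0]   # hypothetical model outputs
assert rmse(y_true, y_pred) < 1.0
assert r2(y_true, y_pred) > 0.95
```

Lower RMSE and higher R2 together indicate a better-fitting regressor, which is the basis on which the RFEC model is preferred over plain RF.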
Abstract This research develops an image registration system specific to art design, employing an upgraded version of the Speeded-Up Robust Features algorithm, termed Gradient Speeded-Up Robust Features. Its key purpose is optimizing computational efficiency during image processing, particularly for real-time analysis in art design scenarios. The proposed algorithm replaces traditional rectangular templates with circular ones, yielding a significant drop in computational cost and improvements in feature detection and matching. Experimental results reveal a 25% reduction in computation time and a 15% increase in correct matches compared with the traditional Speeded-Up Robust Features algorithm. In addition, the average processing time per image is reduced by 1.2 s, making the method particularly suitable for artwork installations, multimedia, and augmented reality environments. This work highlights the growing role of computational approaches in art design and motivates continued improvements in image processing technology. The proposed approach forms a basis for combining technology with image registration in art design and promotes innovation in digital and interactive artwork.
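One intuition behind the circular template, not spelled out in the abstract, is simply that an inscribed circle covers about pi/4 of the pixels of the square it replaces, so per-template work drops by roughly 21%. The sketch below counts this directly; the 64-pixel template size is an arbitrary assumption.

```python
def circular_mask(side):
    """Boolean mask of pixels inside the circle inscribed in a side x side square."""
    r = side / 2.0
    cx = cy = (side - 1) / 2.0
    return [[(x - cx) ** 2 + (y - cy) ** 2 <= r ** 2 for x in range(side)]
            for y in range(side)]

side = 64
mask = circular_mask(side)
circle_px = sum(v for row in mask for v in row)
square_px = side * side
savings = 1 - circle_px / square_px
assert 0.18 < savings < 0.25   # close to 1 - pi/4, i.e. about 21.5% fewer pixels
```

This accounts for part, though not all, of the reported 25% computation-time reduction; the rest presumably comes from the algorithm's other changes.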
Abstract Noisy labels, which often exist in training datasets, contribute to high bias and variance in the prediction outputs of machine learning models. A novel approach, termed IEL (Iterative Ensemble Learning), for simultaneously suppressing the bias and variance of the prediction output is presented. An ensemble of base models is trained directly on the noisy dataset (containing up to 40% mislabeled data). IEL can identify and remove outliers, mislabels, and confusing data within the dataset. Supervised learning is used to iteratively filter out abnormal data without assuming a specific data distribution or requiring true labels for out-of-sample data. Using classification as a demonstrative task, different types of anomalous data can be identified from the prediction output distribution of the base models. The implementation of IEL is highly flexible in that it does not limit the choice of base models: we tested fully connected neural networks, AlexNet, ResNet50, and GoogleNet on various benchmark datasets, and the results show that IEL outperforms state-of-the-art cleaning methods.
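The abstract does not give IEL's exact filtering rule, but one common way an ensemble's "prediction output distribution" can flag mislabels is a vote-disagreement test: if most base models agree on a class that differs from the given label, the sample is suspect. The sketch below is a hypothetical single iteration of such a filter, not the authors' algorithm.

```python
from collections import Counter

def flag_suspect_labels(ensemble_preds, given_labels, min_agree=3):
    """Flag sample i when >= min_agree base models agree on a class
    that differs from the given (possibly noisy) label."""
    flagged = []
    for i, label in enumerate(given_labels):
        votes = Counter(model[i] for model in ensemble_preds)
        top_class, top_count = votes.most_common(1)[0]
        if top_count >= min_agree and top_class != label:
            flagged.append(i)
    return flagged

# Four base models' predictions on five samples (classes 0/1)
ensemble_preds = [
    [0, 1, 1, 0, 1],
    [0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 1, 1, 1, 1],
]
given_labels = [0, 0, 1, 0, 1]   # sample 1's label contradicts a unanimous vote
assert flag_suspect_labels(ensemble_preds, given_labels) == [1]
```

An iterative scheme would drop the flagged samples, retrain the ensemble on the cleaned data, and repeat until no new samples are flagged.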
Abstract The distance measure is a valuable tool for analysing the disparity between Fermatean fuzzy sets (FFSs), which effectively handle ambiguity. Although prior research has offered many distance measures for FFSs, most of them fail to satisfy the distance axioms or lead to contradictions. Therefore, this investigation offers a unique distance measure built from the components of an FFS. The new distance measure is formulated, in terms of the consistency between two FFSs, using a fundamental function that comprises the membership, non-membership, and hesitation grades. We show that the proposed measure possesses the properties required by the formal definition of a distance measure. A comparison of the presented distance measure with prior measures reveals that it does not yield any counterintuitive results. Moreover, case studies on multi-criteria decision-making (MCDM), image fusion, and image segmentation confirm the suggested distance measure's practical usefulness.
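For context, a Fermatean fuzzy value is a pair (mu, nu) with mu^3 + nu^3 <= 1 and hesitation grade pi = (1 - mu^3 - nu^3)^(1/3). The paper's specific measure is not given in the abstract; the sketch below implements a common normalized Hamming-style construction over the cubed grades, as an assumed example of the kind of measure being compared, and checks three distance axioms on toy data.

```python
def hesitation(mu, nu):
    """Fermatean hesitation grade: pi^3 = 1 - mu^3 - nu^3."""
    return (1 - mu ** 3 - nu ** 3) ** (1 / 3)

def ff_distance(A, B):
    """Normalized Hamming-style distance between two FFSs, each given as
    a list of (membership, non-membership) pairs over the same universe."""
    n = len(A)
    total = 0.0
    for (ma, na), (mb, nb) in zip(A, B):
        pa, pb = hesitation(ma, na), hesitation(mb, nb)
        total += abs(ma**3 - mb**3) + abs(na**3 - nb**3) + abs(pa**3 - pb**3)
    return total / (2 * n)

A = [(0.8, 0.5), (0.6, 0.7)]   # valid: 0.8^3 + 0.5^3 = 0.637 <= 1, etc.
B = [(0.8, 0.5), (0.6, 0.7)]
C = [(0.3, 0.9), (0.9, 0.2)]
assert ff_distance(A, B) == 0.0                  # identical sets -> zero distance
assert ff_distance(A, C) == ff_distance(C, A)    # symmetry
assert 0.0 <= ff_distance(A, C) <= 1.0           # boundedness
```

The paper's contribution is precisely a measure for which such axioms hold without the counterintuitive cases that earlier FFS distances exhibit.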
Abstract Chronic obstructive pulmonary disease (COPD) affects the health of millions of people worldwide. This research demonstrates that optimized ML models can predict COPD from the Exasen dataset. The stochastic gamma process is described as a continuous-time model with gamma-distributed increments, and the compound Poisson process is used to model random jumps at Poisson-distributed events, both chosen for their relevance to modeling irregular patterns. The two algorithms used in this study are Extreme Gradient Boosting Classification (XGBC) and CatBoost Classification (CAT), both enhanced by the Artificial Rabbit Optimizer (ARO) for hyperparameter tuning. Performance was measured in terms of accuracy, precision, recall, and F1 score for COPD prediction. Both optimized models performed excellently in COPD prediction; in particular, XGAR reached more than 0.910 accuracy in the training phase. Each model has distinct trade-offs: XGBC yielded slightly higher accuracy but can demand substantial computational resources, while CAT achieved results competitive with XGBC with faster training times. These results suggest that optimized XGBC and CAT are promising for COPD prediction on the Exasen dataset. Further studies will be required to confirm these results, especially regarding clinical applicability and generalizability across populations. The contribution of this study is the novel application of ARO for hyperparameter tuning of COPD prediction models, with significant enhancements in the accuracy and performance of both the XGBC and CAT algorithms on the Exasen dataset. By optimizing critical model parameters, ARO provides enhanced predictive capability and hence has the potential to improve the effectiveness of ML in medical diagnostics.
This work underscores the potential of ML models with advanced optimization techniques to improve COPD diagnosis, supporting its management and personalized treatment.
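ARO's rabbit-inspired update rules are beyond the scope of the abstract, but the role it plays, searching a hyperparameter space for the candidate that maximizes a validation-score fitness, can be sketched generically. The `tune` function below is a simple stochastic-search stand-in for ARO, and the fitness function and parameter ranges are toy assumptions with no dependency on XGBoost or CatBoost.

```python
import random

def tune(fitness, space, n_iter=50, seed=42):
    """Generic stochastic search over a hyperparameter space.
    `space` maps each parameter name to a (low, high) range; `fitness`
    scores a candidate dict (higher is better). This is a simple
    stand-in for population-based optimizers such as ARO."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_iter):
        cand = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        score = fitness(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

def toy_fitness(p):
    # Hypothetical validation score peaking at learning_rate = 0.1;
    # max_depth is included only to show a multi-parameter space.
    return -(p["learning_rate"] - 0.1) ** 2

space = {"learning_rate": (0.01, 0.3), "max_depth": (2, 10)}
best, score = tune(toy_fitness, space)
assert abs(best["learning_rate"] - 0.1) < 0.05
```

In the study's setting, `fitness` would train XGBC or CAT with the candidate hyperparameters and return a cross-validated accuracy or F1 score.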