Galvis, Leonardo; Offermans, Tim; Bertinetto, Carlo G.; Carnoli, Andrea; ...
10 pages
Abstract: Quality by Design (QbD) is a popular formal approach for designing, upscaling and optimizing industrial production facilities towards guaranteed quality. To avoid the many costly experiments required for QbD, historical production data may instead be exploited for optimization, in what is known as a retrospective QbD (rQbD) study. The current rQbD literature only briefly discusses data-driven identification of Critical Process Parameters (CPPs) when process knowledge is limited, and does not cover situations where technical operating limits have not yet been fully explored and/or where parallel equipment (lines) is used. This work presents a new rQbD strategy that addresses these challenges by balancing knowledge obtained from statistical analysis of historical data and from process experts with a carefully designed set of plant-scale experiments within current operational limits. This novel strategy is demonstrated on a long-running industrial lactose production facility. By digitally and experimentally exploring historical operational variability, we found new operational regimes for this production that may lead to up to 7% product quality improvement, reduced energy consumption and increased process understanding. Although optimizing a specific process by necessity requires a process-specific approach, the way in which we systematically optimize the current process with Hybrid AI (combining available knowledge with new insights from historical data) shows that approaches currently used in prospective process upscaling may be adapted to become invaluable for the optimization of full-scale processes with a long operational history. © 2022 The Authors. Published by Elsevier B.V. (CC BY 4.0)
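The data-driven CPP identification mentioned in the abstract above can be illustrated as a simple statistical screen: rank historical process parameters by the strength of their correlation with a critical quality attribute. The function and parameter names below are hypothetical illustrations, not the paper's actual method, which combines such statistics with expert knowledge and plant-scale experiments:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def rank_candidate_cpps(history, quality):
    """Rank process parameters by |correlation| with a quality attribute.

    history : {parameter_name: [historical values]}
    quality : [values of one critical quality attribute]
    A first, purely data-driven screen for candidate CPPs (hypothetical
    sketch; a real study would also consider lags, nonlinearity and
    confounding between parameters).
    """
    return sorted(history,
                  key=lambda p: abs(pearson(history[p], quality)),
                  reverse=True)
```

Parameters that top this ranking would then be candidates for confirmation by process experts and plant-scale experiments rather than being declared critical outright.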
Abstract: Environment, social and governance (ESG) disclosures required of listed companies have aroused considerable interest in both academia and industry for sustainable development and investing. However, the authenticity and credibility of the ESG reports exposed to the public remain in doubt due to black-box-like reporting processes with extensive human involvement. In this study, a framework for an environmental smart reporting system (BI-ESRS) based on blockchain and Internet of Things (IoT) technologies is developed to automate the acquisition of environment-related data and make the reporting reliable and traceable. In addition, we evaluate the authenticity of the data collected from IoT devices, considering human-made counterfeits on measuring instruments for greenwashing. This is anticipated to stimulate companies to submit high-quality data without falsification. Specifically, an unsupervised neural network-enabled spatial-temporal analytics (UN-STA) method is devised to achieve anomaly detection and index the data with an authenticity rate. An artificial neural network, the self-organizing map (SOM), is applied to construct the prediction model. The received signal strength indicator (RSSI) of Bluetooth Low Energy (BLE), the vibration amplitude of smart instruments and the data uploading interval constitute the input vector for competitive learning. Finally, an experimental simulation is carried out to demonstrate the implementation of the proposed system and method, and their effectiveness is verified. Moreover, the sensitivity of the SOM model to the three factors is analyzed by applying the control variate technique. This work is expected to serve as a reference for practitioners with similar requirements in industry and to inspire new ideas for scholars. © 2022 Elsevier B.V. All rights reserved.
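The SOM-based anomaly indexing described above can be sketched as a tiny one-dimensional self-organizing map trained by competitive learning on three-feature vectors (RSSI, vibration amplitude, upload interval); the quantization error to the nearest unit then acts as an anomaly score. This is a minimal illustration, not the UN-STA implementation, and in practice the three features would first be normalized to comparable scales:

```python
import math
import random

def euclid(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def train_som(data, n_units=4, epochs=50, lr0=0.5, seed=0):
    """Train a tiny 1-D self-organizing map by competitive learning."""
    rng = random.Random(seed)
    units = [list(rng.choice(data)) for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                    # decaying learning rate
        radius = max(0.5, (n_units / 2.0) * (1.0 - epoch / epochs))
        for x in data:
            # best-matching unit: the unit whose weight vector is nearest to x
            bmu = min(range(n_units), key=lambda i: euclid(units[i], x))
            for i in range(n_units):
                # neighborhood function pulls nearby units towards x as well
                h = math.exp(-((i - bmu) ** 2) / (2.0 * radius ** 2))
                units[i] = [w + lr * h * (xi - w) for w, xi in zip(units[i], x)]
    return units

def anomaly_score(x, units):
    """Quantization error: distance to the nearest unit (higher = less authentic)."""
    return min(euclid(u, x) for u in units)
```

A reading far from every learned unit receives a high score and would be flagged as a potential counterfeit; the abstract's "authenticity rate" could then be derived from this score.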
Abstract: Accurate, real-time estimation of iron ore sintering quality indices is essential for the stability of the production process. However, sintering process data are generally characterized by high dimensionality, collinearity, nonlinearity and dynamic features, which seriously hinder modeling performance. To cope with these complex properties, an effective fusion of a recurrent neural network with gated recurrent units and partial least squares (GRU-PLS) is introduced for predicting the ferrous oxide (FeO) content of the finished sinter. The proposed GRU-PLS model retains the advantages of the conventional latent-variable method but incorporates a deep inner structure between each pair of latent variables that captures nonlinear and dynamic information simultaneously. The modeling performance of the proposed model is evaluated on actual data collected from the iron ore sintering process of a large iron and steel group in South China. The results show that the GRU-PLS model has the lowest prediction error in comparison with its counterparts. More specifically, the root-mean-square error of the GRU-PLS model is 35.29% lower than that of the recurrent neural network with gated recurrent units alone.
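The latent-variable backbone that GRU-PLS builds on can be sketched with one NIPALS step of ordinary PLS: project the (mean-centered) process variables onto the direction that maximally covaries with the target, then fit an inner model on the resulting scores. In the paper the inner relation is a GRU; here only a linear inner coefficient is shown, and all names are illustrative:

```python
def pls_scores(X, y):
    """First PLS latent variable via one NIPALS step (X, y assumed mean-centered).

    Returns the weight vector w (direction in X maximally covarying with y)
    and the sample scores t = Xw along it.  In GRU-PLS as described in the
    abstract, a GRU would model the inner relation between latent variables;
    this sketch shows only the linear outer step.
    """
    n, p = len(X), len(X[0])
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]   # w ∝ Xᵀy
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    t = [sum(X[i][j] * w[j] for j in range(p)) for i in range(n)]   # scores t = Xw
    return w, t

def inner_coefficient(t, y):
    """Least-squares slope of y on the scores t (the simple linear inner model)."""
    return sum(ti * yi for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)
```

Replacing `inner_coefficient` with a learned sequence model per latent-variable pair is, per the abstract, what lets GRU-PLS capture nonlinear and dynamic behavior that this linear slope cannot.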
Abstract: Given the tendency of food companies to adopt digital technologies throughout the food supply chain and the importance of the security and reliability of these systems in Industry 4.0 applications, we carried out a systematic literature review of cybersecurity issues in the food and beverage industry. The research methodology was defined based on the PRISMA guidelines, and a reference framework was created that leveraged a thematic analysis of the following content categories: definitions, focus on the food and beverage industry, characterization of cybersecurity, and management of cybersecurity risks. This analysis allowed us to compare the advancement of cybersecurity in the food and beverage industry with that proposed in the literature for the Industry 4.0 paradigm. This comparison is useful because, although the Industry 4.0 paradigm finds a wide range of applications in this domain (e.g., Agriculture 4.0), the structural characteristics of this sector, its supply chain and its products create a specific context and situations that may require ad hoc cybersecurity strategies and solutions. The body of knowledge on the food and beverage industry proved mature in terms of definitions, the target supply chain, industrial assets and cyberthreats, but it requires future research effort in terms of practical analyses that can fill the emerging gaps in relation to risks, countermeasures, vulnerabilities, solutions and guidelines. The results of our review guided the discussion, implications and definition of future research routes. © 2022 Elsevier B.V. All rights reserved.
Abstract: The burgeoning development of the cloud market has promoted the expansion of resources held by cloud providers, but the resulting underutilization caused by over-provisioned resources has become a challenge. To improve the utilization of these resources, in November 2009 AWS (Amazon Web Services) first introduced an auction mechanism to offer users these temporarily idle resources, named spot instances. After more than a decade of development, AWS spot instances have become the largest public infrastructure with dynamic pricing based on the supply and demand of instance market resources. Other major cloud providers, such as Microsoft Azure and GCP (Google Cloud Platform), have also introduced their own spot-like instances. These instances are sold in a pay-as-you-go way at much lower prices than on-demand instances with the same configuration. Such price advantages have quickly attracted the attention of users who need large amounts of computing resources. However, the availability of spot instances cannot be guaranteed. Therefore, methods for improving the availability of spot instances have been widely discussed. This paper presents a survey of work related to spot instances: it first introduces the development history of spot instance pricing models, then summarizes methods that can improve the availability of spot instances, and finally discusses how to better understand and use spot instances. We hope that cloud users can obtain enough knowledge from this article to use spot instances from various cloud providers as a source of cheap and stable computing resources. © 2022 Elsevier B.V. All rights reserved.
Yasuda, Yuri D. V.; Cappabianco, Fabio A. M.; Martins, Luiz Eduardo G.; Gripp, Jorge A. B.; ...
15 pages
Abstract: Aircraft visual inspection is a procedure that aims to identify problems with the vehicle structure. The visual inspection of aircraft is part of aircraft Maintenance, Repair and Overhaul (MRO) activities, and combines multiple observation processes conducted by human inspectors to find irregularities and guarantee vehicle safety and readiness for flight. This paper presents a systematic literature review of methods and techniques used in procedures for the visual inspection of aircraft. It also offers some insights into the automation of these processes with robotics and computer vision. A total of 27 primary studies were considered, including methods, conceptual works and other literature reviews. To the best of our knowledge, this is the first systematic literature review on vision-based aircraft inspection. The findings of this review show the deficiencies of the literature with regard to requirements specifications for the development, testing and validation of methods. We also found a scarcity of publications in the aircraft inspection area and a lack of complete intelligent inspection systems in the literature. Despite these deficiencies, our findings also reinforce the potential for automating and improving visual inspection procedures. In addition to these findings, we present the complete methodology we used for performing this systematic review; it documents the process and the criteria for selecting and evaluating the studies, and researchers can use this review framework for future investigations in this area of interest. These results should encourage further work on computer vision and robotics techniques, requirements specification, development, integration, and systematic testing and validation. © 2022 Elsevier B.V. All rights reserved.
Abstract: In this article, we discuss the technical and business risks associated with long-lasting functional digital twins and describe different strategies for their alleviation. Functional digital twins are based on physics-based simulation models and are operated alongside the life cycle of their physical counterparts. These simulation-based digital twins are built using simulation software. The problems with most commercial modeling and simulation tools are their black-box nature and their storage of data in proprietary formats, leading to poor interoperability. Since the digital twins of certain assets need to be operated for a long period, even for several decades, the computing infrastructure, i.e., the computing hardware and software, may not remain the same throughout the product or system life cycle. Computer hardware and operating systems are usually third-party components with limited choices for their users, whereas the selection of simulation tools is more flexible: the designer can choose from, for example, commercial, open-source or in-house solutions. To avoid substantial costs or business disruption, digital twin providers must be able to reproduce the underlying simulation models with up-to-date tools and adopt alternative solutions whenever needed. The findings of the study are presented in the form of propositions throughout the article. © 2022 The Author(s). Published by Elsevier B.V. (CC BY 4.0)
Hymavathi, M.; Rao, C. S. P.; Bahubalendruni, M. V. A. Raju; Prasad, V. S. S. Vara; ...
16 pages
Abstract: The Oblique-Directional Interference Matrix (ODIM) is an important part-relation model that is imperative for assembly/disassembly sequence generation and Exploded View Generation (EVG). The automatic extraction of the ODIM from a three-dimensional (3D) Computer-Aided Design (CAD) assembly model with a large part count is computationally expensive and time-consuming. Therefore, this paper proposes a novel, simple yet effective method for automatic extraction of the ODIM using the contact relations, bounding box data, and two-dimensional (2D) projections of the 3D CAD assembly model. First, all the directions required for testing geometric feasibility are extracted using the contact relations of the selected 3D CAD assembly model. Then, collision between the parts along all these directions is tested using bounding box intersection and projection tests. Finally, the proposed method is tested on various products to create exploded views of 3D CAD assembly models. © 2022 Elsevier B.V. All rights reserved.
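The bounding-box intersection test mentioned above can be illustrated for the axis-aligned case: part A is blocked from moving along an axis if another part's box overlaps A's box in both perpendicular axes and lies ahead in the direction of motion. This is only a hedged sketch of the sweep pre-check (function names are illustrative); the paper's ODIM additionally handles oblique directions via contact relations and 2-D projections:

```python
def overlaps(a_min, a_max, b_min, b_max):
    """1-D open-interval overlap test."""
    return a_min < b_max and b_min < a_max

def blocked_along_axis(box_a, box_b, axis, sign):
    """Does box_b block removal of box_a along +axis (sign=+1) or -axis (sign=-1)?

    Each box is (mins, maxs) with 3 coordinates; axis is 0, 1 or 2.
    box_b blocks iff it overlaps box_a in both perpendicular axes
    (it lies inside A's swept volume laterally) and extends ahead of
    box_a in the direction of motion.
    """
    a_min, a_max = box_a
    b_min, b_max = box_b
    for k in range(3):
        if k == axis:
            continue
        if not overlaps(a_min[k], a_max[k], b_min[k], b_max[k]):
            return False            # no lateral overlap: A sweeps past freely
    if sign > 0:
        return b_max[axis] > a_min[axis]   # some of B lies ahead of A's sweep
    return b_min[axis] < a_max[axis]
```

Running this pairwise test for each candidate direction yields one entry of an interference matrix; parts that block nothing in some direction are candidates for the next disassembly or explosion step.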
Abstract: In the fast-evolving context of Industry 4.0, companies and organizations must continuously evolve and improve. To maintain their business efficiency, they join industrial networks in which they have to collaborate. In the more recent context of Industry 5.0, humans are placed at the heart of industrial processes through human-centric and society-centric approaches. In such a context, human collaboration experiences are numerous and constitute meaningful pieces of knowledge that can be reused within a human-centric approach. This requires (i) formalizing and capitalizing on collaboration experiences, (ii) enabling actors to assess collaboration and (iii) developing reuse mechanisms. This article proposes an experience feedback approach in which collaboration experiences are formalized and directly assessed by the actors who collaborated. Assessment grids are proposed to guide the human evaluations. From the individual evaluations, aggregation mechanisms are proposed to compute the collaboration performance of organizations. Finally, a reuse mechanism makes it possible to learn from prior experiences, identifying the best organizations to involve in a new industrial process. © 2022 Elsevier B.V. All rights reserved.
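The aggregation step described in the abstract above can be sketched as a weighted mean of individual grid-based assessments per organization, followed by a ranking to select partners for a new process. The weighted mean is one plausible aggregation chosen here for illustration; the paper's actual mechanisms and data shapes may differ:

```python
def collaboration_performance(evaluations):
    """Aggregate individual actor evaluations into one score per organization.

    evaluations: {org: [(score, weight), ...]} where each tuple is one
    actor's assessment (score in [0, 1]) and a weight (e.g. reflecting the
    assessor's involvement).  Returns {org: weighted mean score}.
    """
    perf = {}
    for org, marks in evaluations.items():
        total_w = sum(w for _, w in marks)
        perf[org] = sum(s * w for s, w in marks) / total_w
    return perf

def best_partners(perf, k=2):
    """Rank organizations by aggregated performance, best first, keep top k."""
    return sorted(perf, key=perf.get, reverse=True)[:k]
```

The ranking then feeds the reuse mechanism: for a new industrial process, the top-ranked organizations from comparable past experiences are proposed as collaborators.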