Abstract: This paper presents a flexible framework to simulate transport systems based on automated guided vehicles (AGVs). The framework used to perform the simulation also serves to implement the different policies and algorithms of the global control system. For all mobile platforms, regardless of their application, there is great interest in having the model in a simulation environment. However, transport systems tend to be complex, with many vehicles needing to execute many tasks at the same time. Furthermore, when analyzing the global control system, there is no need for a detailed simulation of each AGV. That is why most researchers do not use modular control frameworks, such as the Robot Operating System (ROS), to simulate the global system. Instead, they use specific simulation tools that require the scheduling, routing and allocation policies to be defined in dedicated languages or models. As an alternative, our approach extends the global control framework by replacing the on-board control modules and devices with an event simulator that models AGV behavior statistically. With this approach, the control policies and methods are implemented only once; they are used and tested in simulation and later in the final system. In addition, tasks are easily modeled and verified using a Petri net-based model, which is directly implemented in the executive module that coordinates all the other modules in the framework. This simulation framework has been used during the implementation of several projects, including an autonomous logistics system for hospitals, a warehouse internal transport system using autonomous forklifts, and a factory logistics transport system using tugger trains. Examples of how the simulation can help in the design of parameters such as fleet size and scheduling policies are also included in the paper.
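A minimal sketch of the kind of statistical event simulation the abstract describes, not the authors' framework: AGV travel and handling times are drawn from assumed distributions, and a simple earliest-available dispatching policy is evaluated for different fleet sizes. All parameter values and the FIFO policy are illustrative assumptions.

```python
import heapq
import random

random.seed(1)

def simulate(num_agvs=3, num_tasks=50, mean_travel=60.0, mean_handling=20.0):
    """Return the mean task flow time for an assumed FIFO dispatching policy."""
    events = [(0.0, i) for i in range(num_agvs)]   # (time AGV becomes idle, AGV id)
    heapq.heapify(events)
    flow_times = []
    release = 0.0
    for _ in range(num_tasks):
        release += random.expovariate(1 / 30.0)    # assumed task arrival process
        idle_at, agv = heapq.heappop(events)       # earliest-available AGV serves the task
        start = max(idle_at, release)
        duration = random.expovariate(1 / mean_travel) \
                   + max(random.gauss(mean_handling, 5.0), 0.0)
        finish = start + duration
        flow_times.append(finish - release)
        heapq.heappush(events, (finish, agv))
    return sum(flow_times) / len(flow_times)

# Example of a fleet-sizing question answered purely at the event-simulation level
for fleet in (2, 3, 4, 5):
    print(fleet, "AGVs -> mean task flow time:", round(simulate(num_agvs=fleet), 1))
```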
Abstract: The interplay between non-orthogonal multiple access (NOMA) and opportunistic cognitive radio (CR)-based orthogonal frequency-division multiple access (OFDMA) has recently been recognized as a promising paradigm to support the unprecedented massive connectivity demands of future beyond-fifth-generation (B5G) wireless communication systems. In such systems, called multi-carrier NOMA CR-based systems, each licensed band reserved for primary users can be opportunistically utilized based on power-domain NOMA to serve a group of secondary users simultaneously. An important challenge in this domain is how to provide energy-efficient resource allocation techniques that strike a balance between the total throughput (i.e., the achieved sum-rate) and the power required to achieve that rate, while satisfying network QoS demands and accounting for the unique characteristics of the CR operating environment. In this paper, we propose an energy-efficient resource allocation technique for multi-carrier NOMA CR-based systems that aims at maximizing the overall energy efficiency (EE) of the system under a set of CR and NOMA constraints. The EE maximization problem is shown to be a fractional non-convex optimization problem, which is, in general, hard to solve. To deal with the fractional and non-convex nature of the formulated EE maximization problem, we exploit Dinkelbach's algorithm to transform the EE problem into a parameterized optimization problem, and then use an iterative optimization approach to obtain the solution. Simulation results reveal that the proposed EE maximization-based resource allocation technique outperforms existing resource allocation techniques in terms of overall system EE while striking a good balance between the sum-rate and the transmit power consumption.
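To make the Dinkelbach step concrete, here is an illustrative sketch on a single-link toy version of the energy-efficiency problem EE(p) = R(p) / (p + Pc) with R(p) = B*log2(1 + g*p/N0). The multi-carrier NOMA/CR constraints of the paper are not modelled; the point is only how the fractional objective is turned into the parameterized problem max_p R(p) - lam*(p + Pc) and iterated to convergence. All numerical values are assumptions.

```python
import math

B, g, N0, Pc, Pmax = 1.0, 2.0, 0.1, 0.5, 2.0   # assumed toy parameters

def rate(p):
    return B * math.log2(1 + g * p / N0)

def inner_solution(lam):
    # Maximiser of R(p) - lam*(p + Pc): stationarity gives p = B/(lam*ln2) - N0/g,
    # then clip to the feasible interval [0, Pmax].
    p = B / (lam * math.log(2)) - N0 / g
    return min(max(p, 0.0), Pmax)

lam = 0.1                       # initial guess of the achievable energy efficiency
for it in range(50):
    p = inner_solution(lam)
    F = rate(p) - lam * (p + Pc)        # value of the parameterized problem
    if abs(F) < 1e-9:                   # F(lam) = 0 characterizes the optimal EE
        break
    lam = rate(p) / (p + Pc)            # Dinkelbach update of the EE parameter

print("optimal power:", round(p, 4), "energy efficiency:", round(lam, 4))
```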
Abstract: This paper investigates a multi-objective distributed no-wait permutation flow shop scheduling problem under sequence-dependent setup time constraints. The optimization case considered is the minimization of the makespan and maximum tardiness criteria. The main objective is therefore to find the job sequence that minimizes a function representing these two criteria; this function is a linear combination of the makespan and the maximum tardiness, with a weighting parameter for each criterion. To solve this industrial problem, we propose a mixed integer linear programming (MILP) model and a set of efficient metaheuristics to solve instances of different sizes. To this end, we suggest three nature-inspired metaheuristics: the genetic algorithm (GA), the artificial bee colony (ABC) algorithm, and the migrating birds optimization (MBO) algorithm, for a total of six new algorithms based on nature-inspired metaheuristics. In addition, two constructive heuristics are used: the greedy randomized adaptive search procedure (GRASP) and the Nawaz–Enscore–Ham (NEH) algorithm. The results reveal that the GA with NEH initialization gives the best results compared with the other metaheuristics.
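Below is a minimal sketch of the NEH constructive heuristic on a plain permutation flow shop; the no-wait, distributed and sequence-dependent setup-time features of the paper are omitted for brevity, and the instance data is invented. NEH orders jobs by decreasing total processing time and inserts each job at the position of the partial sequence that minimizes the makespan, which is the kind of seed sequence the abstract describes feeding to the GA.

```python
def makespan(seq, p):
    """Permutation flow shop makespan; p[j][m] = processing time of job j on machine m."""
    m_count = len(p[0])
    completion = [0.0] * m_count
    for j in seq:
        completion[0] += p[j][0]
        for m in range(1, m_count):
            completion[m] = max(completion[m], completion[m - 1]) + p[j][m]
    return completion[-1]

def neh(p):
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))   # decreasing total work
    seq = [order[0]]
    for j in order[1:]:
        candidates = [seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(s, p))   # best insertion position
    return seq

# assumed toy instance: 5 jobs x 3 machines
proc = [[3, 4, 6], [5, 4, 2], [1, 6, 3], [6, 2, 5], [4, 4, 4]]
best = neh(proc)
print("NEH sequence:", best, "makespan:", makespan(best, proc))
```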
Abstract: Serverless computing is emerging as a cloud computing paradigm that provisions computing resources on demand, with billing based on the exact usage of the cloud resources. The responsibility for infrastructure management is undertaken by cloud providers, enabling developers to focus on the development of the business logic of their applications. For managing scalability, various autoscaling mechanisms have been proposed that try to optimize the provisioning of resources based on the posed workload. These mechanisms are configured and managed by the cloud provider, imposing non-negligible administration overhead. A set of challenges is identified for introducing automation and optimizing the provisioning of resources while respecting the Service Level Agreement agreed between cloud and application providers. To address these challenges, we have developed autoscaling mechanisms for serverless applications that are powered by Reinforcement Learning (RL) techniques. A set of RL environments and agents have been implemented (based on the Q-learning, Dyna-Q+ and Deep Q-learning algorithms) for driving autoscaling mechanisms that can autonomously manage dynamic workloads with Quality of Service (QoS) guarantees while opting for efficient usage of resources. The produced environments and agents are evaluated in real and simulated environments, taking advantage of the Kubeless open-source serverless platform. The evaluation results validate the suitability of the proposed mechanisms to efficiently tackle scalability management for serverless applications.
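A hedged sketch of a tabular Q-learning autoscaler of the general kind the abstract describes. The paper's agents target Kubeless and real metrics; here the environment is a stub with an assumed latency model, an assumed reward shaping (SLO penalty plus a per-replica cost), and assumed state discretization, all of which would be replaced by measurements in practice.

```python
import random
from collections import defaultdict

ACTIONS = (-1, 0, +1)              # remove a replica, do nothing, add a replica
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)             # Q[(state, action)] table

def choose(state):
    if random.random() < EPSILON:                      # epsilon-greedy exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def reward(latency_ms, replicas, slo_ms=200, cost_per_replica=0.05):
    # Penalize SLO violations, lightly penalize resource usage (assumed shaping).
    return (-1.0 if latency_ms > slo_ms else 1.0) - cost_per_replica * replicas

def observed_latency(replicas, load):
    """Stub environment: latency grows when load outpaces capacity (assumption)."""
    capacity = replicas * 50
    return 50 + max(0, load - capacity) * 2

replicas, load = 2, 120
state = (min(load // 50, 5), replicas)                 # (load level, replica count)
for episode in range(5000):
    action = choose(state)
    replicas = min(max(replicas + action, 1), 10)
    load = max(20, load + random.randint(-30, 30))     # assumed workload drift
    r = reward(observed_latency(replicas, load), replicas)
    next_state = (min(load // 50, 5), replicas)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    state = next_state

print("learned action per replica count at load level 2:",
      {n: max(ACTIONS, key=lambda a: Q[((2, n), a)]) for n in range(1, 6)})
```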
Abstract: Workflow support typically focuses on single simulation experiments. This is also the case for simulation based on finite element methods. If entire simulation studies are to be supported, flexible means are required for intertwining the revision of the model, the collection of data, and the execution and analysis of simulation experiments. Artefact-based workflows are one means to support entire simulation studies, as has been shown for stochastic discrete-event simulation. To adapt the approach to finite element methods, the set of artefacts (i.e., conceptual model, requirement, simulation model, and simulation experiment) and the constraints that apply are extended with new artefacts, such as the geometrical model, input data, and simulation data. Artefacts, their lifecycles, and constraints are revisited, revealing features that the two types of simulation studies share and those in which they differ. The potential benefits of exploiting an artefact-based workflow approach are shown based on a concrete simulation study. These benefits include guidance in systematically conducting simulation studies, reduction of effort by automatically executing specific steps, e.g., generating and executing convergence tests, and support for the automatic reporting of provenance.
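An illustrative sketch, with assumed names rather than the authors' implementation, of how artefacts, lifecycle states, and a constraint linking them could be represented; the single rule shown (an experiment may only be executed once the artefacts it depends on are validated) is an invented example in the spirit of the artefact-based approach.

```python
from dataclasses import dataclass, field

@dataclass
class Artefact:
    kind: str                      # e.g. "conceptual model", "geometrical model", ...
    state: str = "created"
    depends_on: list = field(default_factory=list)

def can_advance(artefact, new_state):
    # Assumed example constraint: a simulation experiment may only be executed once
    # every artefact it depends on has been validated.
    if artefact.kind == "simulation experiment" and new_state == "executed":
        return all(dep.state == "validated" for dep in artefact.depends_on)
    return True

geometry = Artefact("geometrical model", state="validated")
model = Artefact("simulation model", state="validated", depends_on=[geometry])
experiment = Artefact("simulation experiment", depends_on=[geometry, model])

if can_advance(experiment, "executed"):
    experiment.state = "executed"
print(experiment)
```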
Abstract: The discrete element method (DEM) is frequently used for the numerical analysis of rock fractures. The DEM model requires the specification of microparameters that cannot be measured and are not directly related to the macroscopic properties of the material. Therefore, a calibration process for the microparameters is required to simulate the rock behavior. Since the calibration is usually performed iteratively by trial and error, many DEM simulations must be run, which makes the calibration process computationally expensive. In this work, we propose a calibration method based on the response surface methodology (RSM) that significantly reduces the number of numerical simulations to be performed during the calibration process and, thus, the associated computational cost. The methodology is applied for the first time to impact problems and is validated by means of a benchmark; the good agreement between the predicted and observed results verifies the applicability of the proposed method. A force–penetration curve is obtained through this approach, and the energy–time history is plotted. Moreover, several samples are tested to demonstrate the robustness of the solution. The effect of the stress wave under an infinite boundary condition is studied. The method is capable of capturing the stress wave movement, and the velocity of the stress wave computed from the numerical results is in good agreement with experimental data.
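A sketch of the response-surface idea behind such a calibration, on invented data rather than the paper's DEM results: a quadratic surface is fitted to a small design of simulated runs, and the cheap surrogate is then searched for microparameters matching a target macroscopic response, instead of running further DEM simulations. The parameter names and values are assumptions.

```python
import numpy as np

# assumed design points: (stiffness factor, bond-strength factor) -> peak force (toy values)
X = np.array([[0.5, 0.5], [0.5, 1.0], [0.5, 1.5],
              [1.0, 0.5], [1.0, 1.0], [1.0, 1.5],
              [1.5, 0.5], [1.5, 1.0], [1.5, 1.5]])
y = np.array([2.1, 3.0, 3.6, 3.2, 4.4, 5.1, 3.9, 5.3, 6.2])

def quad_features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)   # fit quadratic surface

def surrogate(x1, x2):
    return quad_features(np.array([[x1, x2]])) @ coef

# Calibration step: pick the microparameters whose predicted response is closest to
# an assumed experimentally measured target.
target = 4.0
grid = [(a, b) for a in np.linspace(0.5, 1.5, 41) for b in np.linspace(0.5, 1.5, 41)]
best = min(grid, key=lambda p: abs(surrogate(*p)[0] - target))
print("calibrated microparameters:", best,
      "predicted response:", round(float(surrogate(*best)[0]), 3))
```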
Abstract: This paper deals with Stochastic Reward Nets (SRN), a powerful extension of Generalized Stochastic Petri Nets (GSPN). SRN have proved their usefulness in the modelling and analysis of the performance, availability and reliability of complex timed systems. SRN are supported by special-purpose tools such as the Stochastic Petri Net Package (SPNP), which enables both analytic studies (based on Markov Reward Models) and simulative studies. The work described in this paper argues that there is still a gap in SRN analysis concerning functional correctness and non-deterministic property checking. To this end, a novel approach is proposed, based on two developed tools. First, a formal reduction of SRN onto the Timed Automata (TA) of the popular Uppaal toolbox was defined and implemented. The Uppaal reduction enables a more complete investigation of SRN models than is allowed by existing SRN tools. However, the practical use of Uppaal prevents studying the performance of large models. Therefore, an SRN kernel, inspired by the formal Uppaal modelling and reasoning carried out, was implemented in Java on the Theatre actor system. The realization supports the parallel simulation of scalable models. The paper applies the developed tools to a realistic grid-computing model and reports some experimental results, together with good execution performance (speedup) when using a scalable version of the grid model on a shared-memory multi-core machine.
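A minimal SRN-style simulation kernel sketched in Python; the paper's kernel is written in Java on the Theatre actor system, so this toy only illustrates the simulative semantics involved: exponential races among enabled transitions, a marking-dependent rate, and a rate reward accumulated over time. The tiny queueing model itself is an assumption.

```python
import random

random.seed(0)

# assumed tiny model: jobs arrive, are served by one of two servers, then leave
marking = {"queue": 0, "busy": 0, "idle": 2}
transitions = {
    "arrive": {"rate": 1.0, "in": {}, "out": {"queue": 1}},
    "start":  {"rate": 5.0, "in": {"queue": 1, "idle": 1}, "out": {"busy": 1}},
    "finish": {"rate": 0.8, "in": {"busy": 1}, "out": {"idle": 1}},
}

def enabled(t):
    return all(marking[p] >= n for p, n in transitions[t]["in"].items())

def fire(t):
    for p, n in transitions[t]["in"].items():
        marking[p] -= n
    for p, n in transitions[t]["out"].items():
        marking[p] += n

time, reward = 0.0, 0.0
while time < 10000.0:
    active = [t for t in transitions if enabled(t)]
    # marking-dependent firing rate: "finish" fires at rate 0.8 per busy server
    rates = {t: transitions[t]["rate"] * (marking["busy"] if t == "finish" else 1)
             for t in active}
    delay = random.expovariate(sum(rates.values()))
    reward += marking["queue"] * delay          # reward rate = current queue length
    time += delay
    r = random.uniform(0, sum(rates.values()))  # pick the winner of the exponential race
    for t, rt in rates.items():
        r -= rt
        if r <= 0:
            fire(t)
            break

print("time-averaged queue length (accumulated reward / time):", round(reward / time, 3))
```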
Abstract: With the widespread use of deep learning (DL) techniques, many end-to-end planetary gearbox fault diagnosis models have been proposed. The input samples of DL models are often composed of time-domain, frequency-domain, or time–frequency domain signals spanning multiple rotation periods, so redundant information is unavoidable, which may hinder the extraction of subtle fault features by the DL model and affect the accuracy of fault diagnosis. In addition, DL-based intelligent fault diagnosis (IFD) methods are developed on large amounts of data, and it is difficult to realize real-time fault diagnosis by relying on local equipment with limited computing power. Therefore, this paper proposes a deep residual network-based IFD method for planetary gearboxes in cloud environments. A cloud-based IFD design is proposed that uses the computing power of the cloud to overcome the insufficient computing power of local equipment. The method takes the wavelet time–frequency images of vibration signals as the network input. To obtain high accuracy, a method for the selection of the wavelet basis function (WBF) is developed based on the difference between the original signals and the reconstructed wavelet signals. Channel attention deep residual networks (CADRN) are used as the diagnosis model, and the channel attention module (CAM) is applied to enhance important fault features and improve the diagnosis accuracy. An ablation study and comparative experiments with three mainstream methods were carried out on a real-world dataset of planetary gearboxes; the proposed method achieved an average accuracy of more than 99%, which verifies its effectiveness.
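A sketch of a wavelet-basis selection step in the spirit the abstract describes: decompose and reconstruct the vibration signal with several candidate wavelets and keep the basis with the smallest reconstruction error. The paper's exact criterion may differ; PyWavelets is assumed to be available, and the signal, candidate list, and thresholding are illustrative assumptions.

```python
import numpy as np
import pywt

fs = 2048
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 400 * t) \
         + 0.05 * np.random.randn(t.size)               # assumed toy vibration signal

candidates = ["db4", "db8", "sym5", "coif3", "morl"]
errors = {}
for name in candidates:
    if name not in pywt.wavelist(kind="discrete"):       # skip continuous-only bases
        continue
    coeffs = pywt.wavedec(signal, name, level=4)
    # keep only the strongest coefficients so the comparison reflects sparsity
    coeffs = [pywt.threshold(c, value=0.1, mode="soft") for c in coeffs]
    rec = pywt.waverec(coeffs, name)[: signal.size]
    errors[name] = float(np.sqrt(np.mean((signal - rec) ** 2)))

best = min(errors, key=errors.get)
print("reconstruction RMSE per basis:", {k: round(v, 4) for k, v in errors.items()})
print("selected wavelet basis:", best)
```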
Abstract: Researchers use simulation to develop new networks and to test, modify, and optimize existing ones. The scientific community has developed a wide range of network simulators to fulfil these objectives and facilitate this creative process. However, selecting a simulator appropriate for a given purpose requires a comprehensive study of network simulators, and the current literature has limitations: only a limited number of simulators have been included in existing studies, functional and performance criteria appropriate for comparison have not been considered, and no reasonable model for selecting a suitable simulator has been presented. To overcome these limitations, we study twenty-three existing network simulators, providing classifications, additional comparison parameters, system limitations, and comparisons using several criteria.
Abstract: In this work, an improved version of the gateway-based multi-hop routing protocol (MGEAR) is studied. The MGEAR protocol is mainly used for prolonging network lifetime in homogeneous wireless sensor networks. The proposed approach aims to prolong network lifetime and enhance the throughput of this protocol in the case of heterogeneous wireless sensor networks (HWSNs). In MGEAR, the network is divided into several fields: sensor nodes in the first field communicate directly with the base station; sensors in the center of the network send their data to the gateway, which performs data fusion and forwards the result to the base station; and the remaining nodes are divided into two equal regions, in each of which sensor nodes are grouped into clusters with a leading node as cluster-head. The central point of our approach is the selection of cluster-heads, which is based on the ratio between the residual energy of each sensor node and the average energy of the region to which it belongs. In order to balance the load and prolong the lifetime of the sensors, the cluster-head election probability is computed in each round according to the residual energy of each sensor node. Finally, the simulation results show that this model achieves higher throughput and increases the lifetime by 130%, 151%, 167%, 171%, and 215% compared with the HCR, ERP, ModLEACH, D-MSEP and DDEEC protocols, respectively, in the case of 2-level heterogeneity. In the case of 3-level heterogeneity, the network lifetime is increased by 123%, 150%, 163% and 218% compared with the HCP, ModLEACH, hetSEP and hetDEEC protocols.
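A hedged sketch of the cluster-head election rule the abstract describes, written with an assumed DEEC/LEACH-style formulation: a node's per-round election probability is the desired cluster-head fraction scaled by the ratio of its residual energy to the average energy of its region, and the usual rotating threshold test decides whether it becomes a cluster-head. The constants, region data, and threshold form are illustrative assumptions, not the paper's exact equations.

```python
import random

P_OPT = 0.1                     # assumed desired fraction of cluster-heads per round

def election_probability(residual_energy, region_avg_energy, p_opt=P_OPT):
    # probability scaled by residual energy relative to the region average
    return min(1.0, p_opt * residual_energy / region_avg_energy)

def is_cluster_head(prob, round_number):
    """LEACH-style rotating threshold test with the energy-scaled probability."""
    if prob <= 0:
        return False
    period = int(1 / prob)
    threshold = prob / (1 - prob * (round_number % period))
    return random.random() < threshold

# assumed region with 5 nodes and their residual energies (J)
region = {"n1": 0.45, "n2": 0.30, "n3": 0.50, "n4": 0.10, "n5": 0.40}
avg = sum(region.values()) / len(region)
round_number = 12
heads = [n for n, e in region.items()
         if is_cluster_head(election_probability(e, avg), round_number)]
print("average region energy:", round(avg, 3), "-> cluster-heads this round:", heads)
```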