Task offloading strategy based on improved double deep Q network in smart cities

Abstract With the rapid development of smart cities, edge computing faces the challenges of a sharp increase in the number of devices and computing tasks. How to efficiently offload tasks so as to optimize the utilization of computing resources while reducing latency and energy consumption has become an urgent problem. This paper proposes a task offloading strategy based on an improved Double Deep Q-Network (DDQN), which designs a new reward function and optimizes the experience replay mechanism to raise the task offloading success rate and the learning efficiency of the agent. Additionally, since task execution may fail under excessive load, this paper proposes a load-balancing remedial strategy and improves the heuristic sub-algorithm of the greedy algorithm based on the characteristics of intensive tasks, further increasing the task offloading success rate. Experimental results show that, compared with three baseline algorithms in scenarios with different device densities, the proposed algorithm achieves significant improvements in key indicators such as task success rate, waiting time, and energy consumption.
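The core idea the abstract builds on, Double DQN, decouples action selection from action evaluation when forming the temporal-difference target. A minimal sketch of that target computation is shown below; the toy linear "networks", the dimensions, and the transition values are illustrative placeholders, not the paper's actual model or offloading environment.

```python
import numpy as np

def q_values(weights, state):
    # Toy linear Q-network: one row of weights per action.
    return weights @ state

rng = np.random.default_rng(0)
n_actions, state_dim = 4, 6          # hypothetical sizes
online_w = rng.normal(size=(n_actions, state_dim))
target_w = online_w.copy()           # target net lags behind the online net

gamma = 0.99                         # discount factor
state = rng.normal(size=state_dim)   # placeholder transition (s, a, r, s')
next_state = rng.normal(size=state_dim)
reward, done = 1.0, False

# Double DQN target: the ONLINE network picks the next action,
# the TARGET network evaluates it. This decoupling reduces the
# overestimation bias of vanilla DQN's max-over-target update.
best_action = int(np.argmax(q_values(online_w, next_state)))
td_target = reward + (0.0 if done else
                      gamma * q_values(target_w, next_state)[best_action])
```

In vanilla DQN both the argmax and the evaluation use the target network, which systematically overestimates Q-values; the paper's improvements (new reward function, optimized experience replay) plug into this same update loop.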

Bin Wu, Liwen Ma, Jia Cong, Jie Zhao, Yue Yang

Tianjin Chengjian University

2025

Wireless Networks: The Journal of Mobile Communication, Computation and Information