Task offloading strategy based on improved double deep Q network in smart cities
Springer Nature
Abstract With the rapid development of smart cities, edge computing faces a sharp increase in the number of devices and computing tasks. Efficiently offloading tasks to optimize the utilization of computing resources while reducing latency and energy consumption has become an urgent problem. This paper proposes a task offloading strategy based on an improved Double Deep Q-Network (DDQN): a newly designed reward function and an optimized experience replay mechanism raise the task offloading success rate and the agent's learning efficiency. In addition, since task execution may fail under excessive load, a load balancing remedial strategy is proposed, and the heuristic sub-algorithm of the greedy baseline is improved to exploit the characteristics of compute-intensive tasks, further increasing the offloading success rate. Experimental results show that, compared with three baseline algorithms in scenarios with different device densities, the proposed algorithm achieves significant improvements in key indicators such as task success rate, waiting time, and energy consumption.
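The distinguishing feature of DDQN over vanilla DQN is that the online network selects the next action while the target network evaluates it, which reduces Q-value overestimation. The following is a minimal sketch of that target computation, not the paper's implementation; the function name, signature, and plain-list Q-values are illustrative assumptions.

```python
def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double-DQN bootstrap target (illustrative sketch, not the paper's code).

    reward        -- immediate reward for the transition
    next_q_online -- Q-values of the next state from the *online* network
    next_q_target -- Q-values of the next state from the *target* network
    """
    if done:
        # Terminal transition: no bootstrapping from the next state.
        return reward
    # Online network picks the greedy action ...
    best_action = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    # ... but the target network supplies its value estimate.
    return reward + gamma * next_q_target[best_action]
```

In a training loop, this target would be computed for each transition sampled from the replay buffer and regressed against the online network's Q-value for the taken action.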