Task offloading strategy of vehicle edge computing based on reinforcement learning

The rapid development of edge computing has a significant impact on the Internet of Vehicles (IoV). However, the high-speed mobility of vehicles makes the task offloading delay unstable and unreliable. Hence, this paper studies the task offloading problem to provide stable computing, communication and storage services for user vehicles in vehicular networks. The offloading problem is formulated to minimize cost consumption under a maximum delay constraint by jointly considering the positions, speeds and computation resources of vehicles. Due to the complexity of the problem, we propose the vehicle deep Q-network (V-DQN) algorithm. In the V-DQN algorithm, we first propose a vehicle adaptive feedback (VAF) algorithm to set the priority with which service vehicles process tasks. Then, the V-DQN algorithm uses the result of VAF to derive the task offloading strategy. In particular, the interruption problem caused by vehicle movement is formulated as a return function to evaluate the task offloading strategy. The simulation results show that our proposed scheme significantly reduces cost consumption.
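Below is a minimal, hypothetical sketch of how a DQN-based offloading decision of this kind could be structured; it is not the paper's V-DQN/VAF implementation. The state features, action set, cost terms and penalty weights (including the interruption penalty standing in for a broken link to a moving service vehicle) are all assumptions introduced for illustration.

```python
# Hypothetical sketch (assumptions, not the paper's code): a DQN-style agent
# that picks an offloading target (e.g. local, roadside unit, or one of several
# service vehicles) from a state built of assumed features such as task size,
# vehicle positions, speeds and available compute. The reward penalizes cost,
# delay-constraint violations, and an interruption caused by vehicle movement.
import random
from collections import deque

import torch
import torch.nn as nn


class QNet(nn.Module):
    """Small fully connected Q-network mapping a state to per-action Q-values."""

    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


def reward(cost: float, delay: float, max_delay: float, interrupted: bool) -> float:
    # Assumed return function: negative cost, with extra penalties when the
    # delay constraint is violated or the service-vehicle link breaks.
    r = -cost
    if delay > max_delay:
        r -= 10.0
    if interrupted:
        r -= 20.0
    return r


def train_step(q, target_q, optimizer, batch, gamma=0.99):
    # One standard DQN update on a batch of (s, a, r, s', done) transitions.
    states, actions, rewards, next_states, dones = batch
    q_values = q(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_max = target_q(next_states).max(dim=1).values
        targets = rewards + gamma * next_max * (1.0 - dones)
    loss = nn.functional.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    state_dim, n_actions = 8, 4  # assumed sizes: feature vector and offloading choices
    q = QNet(state_dim, n_actions)
    target_q = QNet(state_dim, n_actions)
    target_q.load_state_dict(q.state_dict())
    optimizer = torch.optim.Adam(q.parameters(), lr=1e-3)
    replay = deque(maxlen=10_000)

    # Fill the replay buffer with random transitions just to exercise the loop.
    for _ in range(256):
        s = torch.randn(state_dim)
        a = random.randrange(n_actions)
        r = reward(cost=random.random(), delay=random.random(),
                   max_delay=0.5, interrupted=random.random() < 0.1)
        replay.append((s, a, r, torch.randn(state_dim), float(random.random() < 0.05)))

    sample = random.sample(replay, 64)
    states = torch.stack([t[0] for t in sample])
    actions = torch.tensor([t[1] for t in sample])
    rewards = torch.tensor([t[2] for t in sample], dtype=torch.float32)
    next_states = torch.stack([t[3] for t in sample])
    dones = torch.tensor([t[4] for t in sample], dtype=torch.float32)
    print("loss:", train_step(q, target_q, optimizer,
                              (states, actions, rewards, next_states, dones)))
```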

IoV; Mobile edge computing; Task offloading; Dynamic; V-DQN; Allocation

Wang, Lingling; Zhou, Wenjie; Zhai, Linbo


Shandong Normal Univ

2025

Journal of Network and Computer Applications

SCI
ISSN: 1084-8045
Year, Volume (Issue): 2025, 239 (Jul.)