Vehicle edge computing task offloading decision based on improved TD3 algorithm
A task offloading strategy based on Vehicle Edge Computing (VEC) is designed to meet the latency, energy-consumption, and computational-performance requirements of complex vehicular tasks while reducing network resource contention and consumption. The objective is to minimize the long-term cost that balances task processing latency against energy consumption. The task offloading problem in vehicular networks is modeled as a Markov Decision Process (MDP), and an improved algorithm named LN-TD3 is proposed, building on the traditional Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm. The improvement incorporates Long Short-Term Memory (LSTM) networks to approximate the policy and value functions, and the system state is normalized to accelerate network convergence and enhance training stability. Simulation results demonstrate that LN-TD3 outperforms both fully local and fully offloaded computation by more than a factor of two; in terms of convergence speed, LN-TD3 improves on DDPG and TD3 by approximately 20%.
Keywords: VEC; TD3 algorithm; task offloading; deep reinforcement learning; Markov decision process
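As a minimal sketch (not the paper's implementation), the state-normalization step mentioned in the abstract could look like the following: each component of the system state is scaled to [0, 1] from known feature bounds before being fed to the LSTM-based actor and critic networks. The feature set and bounds below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def normalize_state(state, low, high):
    """Min-max scale each state component to [0, 1] using per-feature bounds."""
    state = np.asarray(state, dtype=np.float64)
    return (state - low) / (high - low)

# Hypothetical VEC state vector:
# [task size (Mbit), required CPU cycles (Gcycles),
#  vehicle-to-edge distance (m), remaining deadline (s)]
low = np.array([0.1, 0.2, 0.0, 0.1])
high = np.array([5.0, 10.0, 500.0, 1.0])

s = np.array([2.0, 4.0, 250.0, 0.5])
s_norm = normalize_state(s, low, high)  # every component now lies in [0, 1]
```

Keeping all inputs on a comparable scale in this way is a standard trick for stabilizing deep RL training, which is consistent with the convergence-speed gain the abstract attributes to normalization.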