Research on C-V2X Task Offloading Based on Deep Reinforcement Learning
The rapid development of driverless and assisted-driving technologies has created a significant demand for enhanced in-vehicle computing performance. To meet this demand, task offloading techniques that leverage Mobile Edge Computing (MEC) offer an effective solution. However, making fast and efficient offloading decisions remains a significant challenge, and existing research has typically overlooked the overall system benefits associated with task offloading. To address these issues, a distributed task offloading system model for Cellular Vehicle-to-Everything (C-V2X) based on a Software-Defined Network (SDN) is designed using a vehicle-road-air architecture, and a task offloading control algorithm based on Deep Reinforcement Learning (DRL) is proposed. Cost models are constructed for three task computing modes: local computing, edge computing, and satellite computing. The objective function jointly optimizes two sets of criteria: on the user side, vehicle energy consumption and resource leasing costs; on the server side, task processing delay and server load balance. Under constraints such as the maximum expected task delay and the maximum server load ratio, the task offloading problem is formulated as a Mixed-Integer Nonlinear Programming (MINLP) problem and modeled as a Markov decision process over a discrete-continuous mixed action space. Finally, offloading decisions covering task scheduling, resource leasing, and power control are obtained with DRL algorithms. Experimental results show that, compared with traditional schemes based on Particle Swarm Optimization (PSO) and Genetic Algorithms (GA), the proposed algorithm achieves similar decision-making benefits while reducing the single-decision delay by more than 45%.
Keywords: Deep Reinforcement Learning (DRL); task offloading; Cellular Vehicle-to-Everything (C-V2X); Software-Defined Network (SDN); Genetic Algorithm (GA)
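To make the per-task decision structure in the abstract concrete, the sketch below compares the three computing modes (local, edge, satellite) under a single weighted cost that mirrors the stated criteria: delay, vehicle energy, resource leasing cost, and server load, subject to maximum-delay and maximum-load constraints. This is only a minimal illustration under assumed additive cost forms; all weights, constraint values, and field names are hypothetical, not the paper's actual model.

```python
# Illustrative sketch of the three-mode cost comparison from the abstract.
# The additive cost form, weights, and thresholds are assumptions for
# demonstration, not the paper's MINLP formulation.

def mode_cost(delay, energy, lease_cost, load_ratio,
              w_delay=1.0, w_energy=1.0, w_lease=1.0, w_load=1.0,
              max_delay=1.0, max_load=0.9):
    """Weighted cost of one offloading mode; infeasible modes cost infinity."""
    if delay > max_delay or load_ratio > max_load:
        return float("inf")  # violates max expected delay / max load ratio
    # User-side terms (energy, leasing) plus server-side terms (delay, load).
    return (w_delay * delay + w_energy * energy
            + w_lease * lease_cost + w_load * load_ratio)

def choose_mode(task):
    """Pick the cheapest feasible mode among local / edge / satellite."""
    costs = {
        # Local computing: no leasing cost, no server load contribution.
        "local": mode_cost(task["local_delay"], task["local_energy"], 0.0, 0.0),
        # Edge computing: transmission energy plus leasing cost and MEC load.
        "edge": mode_cost(task["edge_delay"], task["tx_energy"],
                          task["edge_lease"], task["edge_load"]),
        # Satellite computing: typically higher delay, separate lease/load.
        "satellite": mode_cost(task["sat_delay"], task["tx_energy"],
                               task["sat_lease"], task["sat_load"]),
    }
    return min(costs, key=costs.get), costs

# Example task: satellite delay exceeds the 1.0 delay bound, so it is
# infeasible, and edge computing wins on total weighted cost.
task = {"local_delay": 0.8, "local_energy": 2.0,
        "edge_delay": 0.3, "tx_energy": 0.5, "edge_lease": 0.4, "edge_load": 0.5,
        "sat_delay": 1.2, "sat_lease": 0.2, "sat_load": 0.1}
mode, costs = choose_mode(task)
```

In the paper's setting this comparison is not solved greedily per task: the DRL agent learns the discrete mode choice jointly with continuous power-control and resource-leasing actions, which is what makes the action space discrete-continuous mixed.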