Proximal Policy Optimization Algorithm for UAV-assisted MEC Vehicle Task Offloading and Power Control
The architecture of Mobile Edge Computing (MEC) assisted by Unmanned Aerial Vehicles (UAVs) is an efficient paradigm for the flexible handling of computation-intensive and delay-sensitive mobile tasks. Nevertheless, achieving an optimal balance between task latency and energy consumption during task processing remains a challenging issue in vehicular communication applications. To tackle this problem, this paper introduces a model for optimizing task offloading and power control in vehicular networks based on a UAV-assisted mobile edge computing architecture using a Non-Orthogonal Multiple Access (NOMA) approach. The proposed model takes into account dynamic factors such as high vehicle mobility and time-varying wireless channels. The problem is formulated as a Markov decision process, and a distributed deep reinforcement learning algorithm based on Proximal Policy Optimization (PPO) is proposed, enabling each vehicle to make autonomous decisions on task offloading and the associated transmission power using only its locally perceived information, thereby achieving an optimal balance between task latency and energy consumption. Simulation results show that the proposed PPO-based task offloading and power control scheme not only improves task latency and energy consumption compared with existing methods, with an average system cost improvement of at least 13%, but also offers a performance-balanced optimization method that trades off system task latency against energy consumption by adjusting user preference weight factors.
Keywords: Unmanned Aerial Vehicle (UAV)-assisted computing; Mobile Edge Computing (MEC); Proximal Policy Optimization (PPO); Deep reinforcement learning; Power control and task offloading
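As an illustration of the kind of per-vehicle agent the abstract describes, below is a minimal PPO sketch in PyTorch, assuming a discretized joint action space (offloading decision × transmit-power level) and a reward defined as the negative weighted sum of task latency and energy consumption with a preference weight w. All dimensions, layer sizes, and names here (N_OFFLOAD, N_POWER, OBS_DIM, W_LATENCY) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the paper's implementation): a per-vehicle PPO agent
# whose discrete action jointly encodes an offloading decision and a
# transmission-power level. The reward weights task latency against energy
# consumption by a user preference factor w, as in the abstract. All
# dimensions and names below are illustrative assumptions.
import torch
import torch.nn as nn
from torch.distributions import Categorical

N_OFFLOAD = 2       # assumed: 0 = local execution, 1 = offload to UAV-MEC
N_POWER = 5         # assumed: number of discretized transmit-power levels
OBS_DIM = 8         # assumed: local state (queue length, channel gain, ...)
W_LATENCY = 0.5     # preference weight w; cost = w*latency + (1-w)*energy

class ActorCritic(nn.Module):
    """Shared-trunk actor-critic over the joint (offload, power) action."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(),
                                   nn.Linear(64, 64), nn.Tanh())
        self.pi = nn.Linear(64, N_OFFLOAD * N_POWER)  # joint action logits
        self.v = nn.Linear(64, 1)                     # state-value head

    def forward(self, obs):
        h = self.trunk(obs)
        return Categorical(logits=self.pi(h)), self.v(h).squeeze(-1)

def reward(latency, energy, w=W_LATENCY):
    # Negative weighted system cost: the latency/energy balance the
    # abstract tunes via the user preference weight factor.
    return -(w * latency + (1.0 - w) * energy)

def decode(a):
    # Map a joint action index back to (offload decision, power level).
    return a // N_POWER, a % N_POWER

def ppo_update(net, opt, obs, act, logp_old, adv, ret,
               clip_eps=0.2, epochs=4):
    """Clipped-surrogate PPO update on one collected rollout."""
    for _ in range(epochs):
        dist, value = net(obs)
        ratio = torch.exp(dist.log_prob(act) - logp_old)
        surr1 = ratio * adv
        surr2 = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv
        loss = (-torch.min(surr1, surr2).mean()        # clipped policy loss
                + 0.5 * (ret - value).pow(2).mean()    # value-function loss
                - 0.01 * dist.entropy().mean())        # exploration bonus
        opt.zero_grad()
        loss.backward()
        opt.step()

if __name__ == "__main__":
    # Smoke test on synthetic data; a real agent would collect rollouts
    # from the vehicular-network environment instead.
    net = ActorCritic()
    opt = torch.optim.Adam(net.parameters(), lr=3e-4)
    obs = torch.randn(32, OBS_DIM)          # fake batch of local observations
    dist, _ = net(obs)
    act = dist.sample()
    ppo_update(net, opt, obs, act, dist.log_prob(act).detach(),
               adv=torch.randn(32), ret=torch.randn(32))
```

In this sketch each vehicle trains on its own local observations only, matching the distributed, autonomous decision-making described in the abstract; the joint discrete action is one simple way to couple the offloading choice with power control in a single policy head.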