A Multi-user Computation Offloading Optimization Model and Algorithm Based on Deep Reinforcement Learning
In dense Mobile Edge Computing (MEC) deployment scenarios, uncertainty in edge server load can easily cause edge servers to become overloaded, which sharply increases the delay and energy consumption of computation offloading. To address this issue, a multi-user computation offloading optimization model and algorithm based on Deep Deterministic Policy Gradient (DDPG) is proposed. First, to balance delay and energy consumption, a utility function is established and maximizing system utility is taken as the optimization objective, transforming the computation offloading problem into a mixed-integer nonlinear programming problem. Then, to handle the large state space and the coexistence of discrete and continuous variables in the action space, the DDPG deep reinforcement learning algorithm is improved with a discretization step, and on this basis a multi-user computation offloading optimization method is proposed. Finally, this method is applied to solve the nonlinear programming problem. Simulation results show that, compared with existing algorithms, the proposed method effectively reduces the probability of edge server overload and exhibits good stability.
Keywords: Mobile Edge Computing (MEC); Computation offloading; Deep reinforcement learning; Resource allocation
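To make the two ideas in the abstract concrete, the following minimal Python sketch illustrates (a) a weighted-sum utility that trades off delay and energy gains relative to local execution, and (b) how a continuous DDPG actor output can be discretized into a binary offloading decision plus a continuous resource-allocation fraction. The symbols and function names (user_utility, discretize_action, w_t, w_e) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical weighted-sum utility for one user: the relative gain in
# delay and energy compared with fully local execution. The weights
# w_t and w_e balance delay against energy consumption.
def user_utility(T_local, E_local, T_off, E_off, w_t=0.5, w_e=0.5):
    delay_gain = (T_local - T_off) / T_local
    energy_gain = (E_local - E_off) / E_local
    return w_t * delay_gain + w_e * energy_gain

# Discretizing a DDPG actor output: the first component is thresholded
# into a binary offload decision (0 = execute locally, 1 = offload to
# the edge server); the second stays continuous as the fraction of edge
# CPU resources requested.
def discretize_action(raw_action):
    offload = int(raw_action[0] > 0.0)                       # discrete part
    cpu_fraction = float(np.clip(raw_action[1], 0.0, 1.0))   # continuous part
    return offload, cpu_fraction

if __name__ == "__main__":
    # Toy numbers: offloading halves the delay and cuts energy by 60%.
    print(user_utility(T_local=0.2, E_local=1.0, T_off=0.1, E_off=0.4))
    print(discretize_action(np.array([0.7, 1.3])))
```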