A computation offloading strategy based on deep reinforcement learning for edge computing
The advent of Mobile Edge Computing (MEC) offers a promising solution to the challenges faced by resource-limited mobile devices. In this study, we explore a dynamic task offloading strategy based on Deep Reinforcement Learning (DRL), with a particular focus on discrete events. We further introduce an enhanced variant of the Deep Deterministic Policy Gradient (DDPG) algorithm, which operates in the continuous action space of DRL. With this approach, each mobile user independently learns an efficient computation offloading policy, enabling smart decisions between on-device computation and offloading to the edge. Simulation results show that users can autonomously allocate power between local processing and task offloading based solely on their local observations of the MEC system.
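As a rough illustration of the decision interface described above, the sketch below shows a DDPG-style deterministic actor that maps a user's local observation to a continuous power-split action in [0, 1], plus the Polyak soft-update step DDPG uses for its target networks. All names, the observation features, and the weights are hypothetical placeholders, not the paper's actual model; a real implementation would use trained neural networks and the full actor-critic update.

```python
import math

def actor(obs, weights, bias):
    # Deterministic policy: a linear layer plus a sigmoid squashes the
    # output into [0, 1], interpreted here as the fraction of transmit
    # power devoted to offloading (the rest goes to local computation).
    z = sum(w * x for w, x in zip(weights, obs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def soft_update(target, online, tau=0.005):
    # DDPG target-network update (Polyak averaging):
    #   theta' <- tau * theta + (1 - tau) * theta'
    return [tau * o + (1 - tau) * t for t, o in zip(target, online)]

# Hypothetical local observation: (task size in Mbit, channel gain, queue length)
obs = (2.0, 0.8, 3.0)
weights, bias = [0.4, -1.2, 0.1], 0.05

offload_frac = actor(obs, weights, bias)   # power fraction for offloading
local_frac = 1.0 - offload_frac            # power fraction for local processing
print(round(offload_frac, 3))

# One soft update of an all-zero target toward the online weights.
target = soft_update([0.0, 0.0, 0.0], weights)
print([round(t, 6) for t in target])
```

The sigmoid squashing is one simple way to keep the continuous action inside a valid power budget; the actual paper's action space and network architecture may differ.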
mobile edge computing; deep reinforcement learning; discrete dynamic task offloading