A Computation Offloading Strategy Based on Deep Reinforcement Learning for Edge Computing

The advent of Mobile Edge Computing (MEC) offers a promising solution to the resource constraints of mobile devices. This study investigates a dynamic task offloading strategy based on Deep Reinforcement Learning (DRL), designed specifically for discrete events, and introduces an enhanced Deep Deterministic Policy Gradient (DDPG) algorithm that operates in a continuous action space. With this method, an efficient computation offloading policy is learned independently for each mobile user, enabling intelligent decisions between local computation on the device and offloading to the edge. Simulation results show that each user can adaptively allocate power between local execution and task offloading based on its local observation of the MEC system.
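The abstract describes a DDPG-style deterministic policy that maps each user's local observation of the MEC system to a continuous power allocation between local execution and offloading. The paper's actual network architecture, state features, and power budget are not given here; the following is a minimal illustrative sketch in NumPy with hypothetical observation features (queue length, channel gain, battery level), hidden-layer size, and power budget `p_max`:

```python
import numpy as np

rng = np.random.default_rng(0)

class DDPGActor:
    """Minimal deterministic policy network: local observation -> continuous power split.

    Illustrative only: the dimensions, features, and power budget below are
    assumptions, not the paper's actual design.
    """

    def __init__(self, obs_dim, act_dim, hidden=16, p_max=1.0):
        self.p_max = p_max  # hypothetical per-user transmit/compute power budget
        self.w1 = rng.normal(0.0, 0.5, (obs_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.5, (hidden, act_dim))
        self.b2 = np.zeros(act_dim)

    def act(self, obs, noise_std=0.0):
        h = np.tanh(obs @ self.w1 + self.b1)      # hidden layer
        raw = h @ self.w2 + self.b2
        a = 1.0 / (1.0 + np.exp(-raw))            # sigmoid squash to (0, 1)
        # Gaussian exploration noise, clipped back into the valid range
        a = np.clip(a + rng.normal(0.0, noise_std, a.shape), 0.0, 1.0)
        return a * self.p_max                     # scale to the power budget

# Hypothetical observation: [task queue length, channel gain, battery level]
actor = DDPGActor(obs_dim=3, act_dim=2)
powers = actor.act(np.array([0.4, 0.8, 0.6]), noise_std=0.1)
p_local, p_offload = powers  # power for local execution vs. task offloading
```

In a full DDPG implementation this actor would be trained against a critic network using replay-buffer samples and soft target updates; the sketch only shows the per-user decision step the abstract refers to, in which each user acts on its own local observation rather than on global system state.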

Keywords: mobile edge computing; deep reinforcement learning; discrete dynamic task offloading

程耀东 (Cheng Yaodong), 田润鑫 (Tian Runxin)


Xijing University, Xi'an 710123, Shaanxi, China


2024

Wireless Internet Technology (无线互联科技)
Jiangsu Institute of Scientific and Technical Information

Impact factor: 0.263
ISSN: 1672-6944
Year, volume (issue): 2024, 21(13)