
Deep Deterministic Policy Gradient-based Energy Efficiency Optimization Algorithm for MEC Networks
Mobile edge computing (MEC) technology can provide users with data processing services, but the computing resources of MEC servers are limited. Reasonable migration of tasks from users to MEC servers, and reasonable allocation of server resources to users according to task requirements, are therefore key to improving user-side energy efficiency. To solve this problem, a deep deterministic policy gradient-based energy efficiency optimization (DDPG-EEO) algorithm is proposed. Under the premise of meeting the delay requirements, an optimization problem that maximizes energy efficiency over the task offloading ratio and the resource allocation strategy is established. The optimization problem is then formulated as a Markov decision process (MDP) and solved with the deep deterministic policy gradient method. Simulation results show that the DDPG-EEO algorithm reduces the energy consumption of user terminals (UTs) and improves the task completion rate.
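The approach outlined above casts joint task offloading and resource allocation as an MDP whose reward is user-side energy efficiency (bits processed per joule) under a delay constraint, and then trains a DDPG agent on it. The following is a minimal sketch of the MDP part only (the DDPG training loop itself is omitted); the environment class, the state/action layout, and all numeric parameters below are illustrative assumptions, not the paper's actual system model or settings.

```python
import numpy as np

class OffloadEnv:
    """Toy MDP for partial offloading, per the abstract's formulation.
    State: (task size in bits, uplink channel gain).
    Action: (offload ratio rho in [0,1], fraction f of MEC CPU granted).
    Reward: user-side energy efficiency if the delay requirement holds,
    a penalty otherwise. All constants are assumed for illustration."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.f_local = 1e9        # local CPU speed, cycles/s (assumed)
        self.f_mec = 10e9         # MEC server CPU speed, cycles/s (assumed)
        self.cycles_per_bit = 1000.0
        self.kappa = 1e-27        # effective switched capacitance (assumed)
        self.p_tx = 0.5           # transmit power, W (assumed)
        self.bandwidth = 1e6      # uplink bandwidth, Hz (assumed)
        self.noise = 1e-9         # noise power, W (assumed)
        self.deadline = 0.5       # delay requirement, s (assumed)

    def reset(self):
        # New task: 0.1-1 Mbit, with a random channel gain.
        self.state = np.array([self.rng.uniform(1e5, 1e6),
                               self.rng.uniform(1e-7, 1e-6)])
        return self.state

    def step(self, action):
        rho, f = np.clip(action, 0.0, 1.0)
        bits, gain = self.state
        cycles = bits * self.cycles_per_bit
        # Local computing: execution time and dynamic CPU energy.
        t_loc = (1 - rho) * cycles / self.f_local
        e_loc = self.kappa * (1 - rho) * cycles * self.f_local ** 2
        # Offloaded part: uplink transmission, then remote execution.
        rate = self.bandwidth * np.log2(1 + self.p_tx * gain / self.noise)
        t_tx = rho * bits / rate
        e_tx = self.p_tx * t_tx
        t_mec = rho * cycles / max(f * self.f_mec, 1e-9)
        delay = max(t_loc, t_tx + t_mec)   # local and remote run in parallel
        energy = e_loc + e_tx              # only user-side energy counts
        reward = bits / energy if delay <= self.deadline else -1.0
        return self.reset(), reward, True  # one task per episode (toy model)

def evaluate(policy, env, episodes=200):
    """Average reward of a (state -> action) policy over random tasks."""
    total = 0.0
    s = env.reset()
    for _ in range(episodes):
        s, r, _ = env.step(policy(s))
        total += r
    return total / episodes
```

Comparing fixed policies, e.g. `evaluate(lambda s: np.array([0.0, 1.0]), OffloadEnv())` (all-local) against a half-offloading policy, exposes the reward shape a DDPG actor would learn to exploit: the agent outputs a continuous state-dependent `(rho, f)` pair, which is exactly the continuous-action setting that motivates DDPG over discrete-action Q-learning here.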

mobile edge computing; task offloading; resource allocation; reinforcement learning; deep deterministic policy gradient

陈卡 (CHEN Ka)


Zhumadian Vocational and Technical College, Zhumadian 463000, Henan, China


Funding: Henan Province Science and Technology Research Project (212102210516); Henan Province Soft Science Research Plan Project (182400410608); Henan Province Higher Education Teaching Reform Research and Practice Project (2021SJGLX865)

2024

Fire Control & Command Control (火力与指挥控制)
Publisher: 火力与指挥控制研究会, 火力与指挥控制专业情报网


Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.312
ISSN: 1002-0640
Year, Volume (Issue): 2024, 49(7)