Deep Deterministic Policy Gradient-based Energy Efficiency Optimization Algorithm for MEC Networks
Chen Ka 1
Author Information
- 1. Zhumadian Vocational and Technical College, Zhumadian 463000, Henan, China
Abstract
Mobile edge computing (MEC) technology can provide users with data processing services, but the computing resources of MEC servers are limited. Reasonable migration of tasks from users to MEC servers, and reasonable allocation of server resources to users based on task requirements, are therefore key factors in improving user-side energy efficiency. To address this problem, a deep deterministic policy gradient-based energy efficiency optimization (DDPG-EEO) algorithm is proposed. Under the premise of meeting the delay requirements, an optimization problem is established that maximizes energy efficiency over the task offloading rate and the resource allocation strategy. The optimization problem is then formulated as a Markov decision process (MDP) and solved with the deep deterministic policy gradient method. Simulation results show that the DDPG-EEO algorithm reduces the energy consumption of user terminals (UTs) and improves the task completion rate.
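The abstract describes an MDP whose action is the pair (task offloading rate, resource allocation) and whose reward is the UT's energy efficiency under a delay constraint. The following is a minimal sketch of how such a reward function might be set up; the delay/energy models (linear computation cost, fixed transmit power, CPU energy of the form κf² per cycle) and all numerical parameters are common illustrative assumptions, not details taken from the paper.

```python
def local_delay(bits, offload_rate, f_local, cycles_per_bit=1000.0):
    """Processing delay of the task fraction executed locally on the UT."""
    return (1.0 - offload_rate) * bits * cycles_per_bit / f_local

def edge_delay(bits, offload_rate, rate_bps, f_edge, cycles_per_bit=1000.0):
    """Uplink transmission delay plus MEC-server processing delay
    for the offloaded task fraction."""
    offloaded_bits = offload_rate * bits
    return offloaded_bits / rate_bps + offloaded_bits * cycles_per_bit / f_edge

def ut_energy(bits, offload_rate, f_local, rate_bps,
              tx_power=0.5, kappa=1e-27, cycles_per_bit=1000.0):
    """UT-side energy: local CPU energy (kappa * f^2 per cycle)
    plus transmit energy (power * transmission time)."""
    local_cycles = (1.0 - offload_rate) * bits * cycles_per_bit
    e_local = kappa * f_local ** 2 * local_cycles
    e_tx = tx_power * (offload_rate * bits / rate_bps)
    return e_local + e_tx

def reward(bits, offload_rate, f_local, f_edge, rate_bps, delay_max):
    """Energy efficiency (bits per joule) when the delay bound is met;
    a fixed penalty when it is violated (an assumed penalty scheme)."""
    delay = max(local_delay(bits, offload_rate, f_local),
                edge_delay(bits, offload_rate, rate_bps, f_edge))
    if delay > delay_max:
        return -1.0  # delay constraint violated
    return bits / ut_energy(bits, offload_rate, f_local, rate_bps)
```

In a DDPG training loop, the actor network would output the continuous action (offloading rate and the MEC server's allocated computing/communication resources), and the critic would be trained on this reward; local and edge execution run in parallel here, so the effective delay is the maximum of the two.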
Keywords
mobile edge computing / task offloading / resource allocation / reinforcement learning / deep deterministic policy gradient
Funding
Henan Provincial Science and Technology Research Project (212102210516)
Henan Provincial Soft Science Research Program (182400410608)
Henan Provincial Higher Education Teaching Reform Research and Practice Project (2021SJGLX865)
Publication Year
2024