Reinforcement learning-based task offloading and resource allocation in MEC networks
To address the extra cost incurred by task migration of mobile devices in two-tier cellular networks based on mobile edge computing (MEC), a mobility-aware joint task offloading and resource allocation strategy is proposed to reduce the probability of task migration and thereby maximize the total user revenue. First, the optimization problem of maximizing the total user revenue is formulated. Second, considering time-varying computation tasks and resource allocation, the optimization problem is modeled as a Markov decision process (MDP), and a novel reinforcement learning-based algorithm with the Q-learning method (RLAQM) is proposed to solve it. Finally, simulation results show that, compared with other algorithms, the proposed algorithm significantly improves the total user revenue.

mobile edge computing; task offloading; mobile awareness; Markov decision process (MDP); reinforcement learning

陈雷


School of Public Security Information Technology and Intelligence, Criminal Investigation Police University of China, Shenyang 110854, Liaoning, China


Major Cultivation Project of the Criminal Investigation Police University of China; Shenyang Social Governance Science and Technology Special Project; Scientific Research Project of the Education Department of Liaoning Province; University-level Project of the Criminal Investigation Police University of China

D202300222-322-3-35ZGXJ2020005D2022045

2024

Engineering Journal of Wuhan University (武汉大学学报(工学版))
Wuhan University


Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.621
ISSN:1671-8844
Year, volume (issue): 2024, 57(3)