Reinforcement learning-based task offloading and resource allocation in MEC networks

To address the extra cost incurred by task migration of mobile devices in two-tier cellular networks based on mobile edge computing (MEC), a mobility-aware joint task offloading and resource allocation strategy is proposed to reduce the probability of task migration and thereby maximize the total user revenue. First, the optimization problem of maximizing the total user revenue is formulated. Second, taking time-varying computation tasks and resource allocation into account, the optimization problem is modeled as a Markov decision process (MDP), and a novel reinforcement learning-based algorithm with the Q-learning method (RLAQM) is proposed to solve it. Finally, simulation results show that, compared with other algorithms, the proposed algorithm significantly improves the total user revenue.
mobile edge computing; task offloading; mobile awareness; Markov decision process (MDP); reinforcement learning
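The abstract describes solving the offloading MDP with Q-learning. The following is a minimal tabular Q-learning sketch of that kind of update loop; the discretized load states, the three offloading actions, and the reward and transition functions below are illustrative assumptions for exposition, not the paper's actual RLAQM state, action, or reward design.

```python
import random

# Hypothetical discretized MEC server load levels: low / medium / high.
STATES = range(3)
# Hypothetical actions: 0 = process locally, 1 = offload to serving
# MEC server, 2 = offload to the macro-tier server.
ACTIONS = range(3)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def reward(state, action):
    """Toy reward: assumed per-action revenue minus a load penalty
    when offloading to an already-loaded serving MEC server."""
    base = [1.0, 3.0, 2.0][action]
    penalty = state if action == 1 else 0
    return base - penalty

def step(state, action):
    """Toy dynamics: offloading to the serving server raises its load;
    otherwise the load decays by one level."""
    if action == 1:
        return min(state + 1, 2)
    return max(state - 1, 0)

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = 0
    for _ in range(episodes):
        # Epsilon-greedy action selection.
        if rng.random() < EPS:
            action = rng.choice(list(ACTIONS))
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        r = reward(state, action)
        nxt = step(state, action)
        # Q-learning temporal-difference update.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (r + GAMMA * best_next - q[(state, action)])
        state = nxt
    return q

q_table = train()
# Greedy offloading policy derived from the learned Q-table.
policy = {s: max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in STATES}
```

Under these toy rewards, offloading to an unloaded serving server earns more than local processing, so the learned Q-values at the low-load state favor offloading; the same tabular update would apply to whatever state and action encoding the paper actually uses.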