Predictive resource allocation: unsupervised learning of Markov decision processes
Jiajun Wu 1, Jianyu Zhao 1, Chengjian Sun 1, Chenyang Yang 1
Author information
- 1. School of Electronic and Information Engineering, Beihang University, Beijing 100191, China
Abstract
When future information about a mobile user, such as its trajectory, is known in advance, predictive resource allocation for video-on-demand service can reduce the energy consumption of base stations or increase network throughput while ensuring the user experience. Traditional methods for predictive resource allocation first predict user information (say, trajectory) and then optimize resource (say, power) allocation. However, the prediction accuracy degrades as the prediction horizon grows, which reduces the gain brought by prediction. To deal with this issue, several recent works formulated predictive resource allocation as a Markov decision process (MDP) and employed deep reinforcement learning for online decision-making. However, for this kind of MDP, which is well suited to reinforcement learning, existing works design the state in a trial-and-error manner. Moreover, for constrained optimization problems, most existing reinforcement learning methods for wireless problems satisfy the constraints by adding penalty terms with manually tuned hyper-parameters to the reward function. Taking the problem of minimizing the transmit energy consumption of base stations under the constraint of stall-free video playback for mobile users as an example, this paper proposes an online unsupervised deep learning method that jointly optimizes information prediction and resource allocation in an end-to-end manner, and establishes the connection between this method and deep reinforcement learning. The proposed method improves the performance of predictive resource allocation through online end-to-end unsupervised deep learning, designs the state systematically rather than by trial and error, and satisfies complex constraints automatically rather than by introducing hyper-parameters. Simulation results show that the proposed online unsupervised deep learning achieves almost the same transmit energy consumption as deep reinforcement learning while simplifying the design of the state, which verifies the theoretical analysis.
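To make the contrast drawn in the abstract concrete, the following minimal Python sketch, which is not taken from the paper, illustrates the two constraint-handling strategies being compared: a penalty term with a hand-tuned hyper-parameter added to a reinforcement-learning reward, versus a primal-dual style multiplier update that adapts automatically while the stall-free playback constraint is violated. All names here (penalized_reward, dual_update, lam, step) are illustrative assumptions, not the authors' notation.

```python
# Illustrative contrast between the two constraint-handling strategies
# discussed in the abstract (a hypothetical sketch, not the paper's code).

# 1) Penalty-based reward shaping: a manually tuned hyper-parameter `lam`
#    trades off the objective (transmit energy) against constraint
#    violation (video stalling time in seconds).
def penalized_reward(energy: float, stall_duration: float, lam: float = 10.0) -> float:
    """Reward with a hand-tuned penalty weight `lam`."""
    return -energy - lam * max(stall_duration, 0.0)

# 2) Primal-dual style update: the multiplier is adapted online by
#    gradient ascent on the constraint violation, so no manual tuning
#    of a penalty weight is needed.
def dual_update(lam: float, stall_duration: float, step: float = 0.01) -> float:
    """Projected gradient ascent on a Lagrange-multiplier-like weight."""
    return max(lam + step * stall_duration, 0.0)

# Toy usage: the multiplier grows while playback still stalls and
# stops growing once the stall-free constraint is met.
lam = 0.0
for stall in [0.5, 0.3, 0.1, 0.0, 0.0]:
    lam = dual_update(lam, stall)
    print(f"stall={stall:.1f}s -> lam={lam:.4f}")
```

The automatic update in the second function is one standard way to satisfy constraints without hand-tuned penalty weights, consistent with, though not necessarily identical to, the mechanism the paper proposes.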
Keywords
predictive resource allocation / Markov decision process / unsupervised deep learning / deep reinforcement learning / state design / complex constraint
Funding
National Key Research and Development Program of China (2022YFB2902002)
Key Program of the National Natural Science Foundation of China (61731002)
General Program of the National Natural Science Foundation of China (62271024)
Year of publication
2024