Survey of Research on Offline Reinforcement Learning
Chen Siqi 1, Geng Jie 2, Wang Yunfei 1, Yu Weichi 1, Zhao Jianing 3, Wang Shichao 1
Author Information
- 1. School of Information Science and Engineering, Chongqing Jiaotong University, Chongqing 400074
- 2. Tianjin University Chest Hospital, Tianjin 300072
- 3. College of Intelligence and Computing, Tianjin University, Tianjin 300072
Abstract
Offline reinforcement learning, as an emerging paradigm, leverages large amounts of offline data for policy learning without the need for active interaction with the environment. It demonstrates high potential and value, especially in high-risk fields such as healthcare and autonomous driving. This review proceeds from the basic concepts, core issues, and main methods of offline reinforcement learning, focusing on the various strategies for mitigating distributional shift: constraining the target policy to align with the behavior policy, value function constraints, quantification of model uncertainty, and model-based offline reinforcement learning methods. Finally, current simulation environments for offline reinforcement learning and its significant application scenarios are discussed.
Key words
reinforcement learning / offline reinforcement learning / automated decision-making / extrapolation errors
Funding
National Natural Science Foundation of China (61602391)
Tianjin Science and Technology Plan Project (22JCZDJC00580)
Year of Publication
2024