Novel Reinforcement Learning Algorithm: Stable Constrained Soft Actor Critic
Hai Ri 1, Zhang Xingliang 2, Jiang Yuan 1, Yang Yongjian 1
Author Information
- 1. College of Computer Science and Technology, Jilin University, Changchun 130012, China
- 2. China Mobile Group Jilin Co., Ltd., China Mobile Communications Group Co., Ltd., Changchun 130022, China
Abstract
To address the problem that Q-function overestimation in the fixed-temperature SAC (Soft Actor Critic) algorithm may cause the algorithm to become trapped in a local optimum, an in-depth analysis is conducted and a Stable Constrained Soft Actor Critic (SCSAC) algorithm is proposed. By improving the maximum entropy objective function, SCSAC remedies the Q-function overestimation hidden in fixed-temperature SAC while also improving the algorithm's stability during testing. Finally, SCSAC is evaluated on four OpenAI Gym MuJoCo environments. The experimental results show that, compared with the fixed-temperature SAC algorithm, SCSAC effectively reduces the number of Q-function overestimation occurrences and achieves more stable results in the testing process.
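For context, the fixed-temperature SAC baseline that the abstract refers to computes its soft Bellman target from a clipped double-Q estimate plus an entropy bonus weighted by a fixed temperature α. The sketch below is a minimal NumPy illustration of that standard target (it shows the baseline mechanism the paper analyzes, not the SCSAC modification itself; all names and numeric values are illustrative):

```python
import numpy as np

# Fixed-temperature SAC soft Bellman target (standard formulation):
#   y = r + gamma * ( min(Q1', Q2') - alpha * log_pi(a'|s') )
# Taking the minimum of two target critics is the usual device for
# curbing Q-value overestimation; the fixed alpha weights the
# maximum-entropy bonus. This is an assumed textbook setup.

def soft_q_target(reward, q1_next, q2_next, log_pi_next,
                  gamma=0.99, alpha=0.2):
    """Clipped double-Q soft target for a batch of transitions."""
    min_q = np.minimum(q1_next, q2_next)   # clipped double-Q estimate
    entropy_bonus = -alpha * log_pi_next   # fixed-temperature entropy term
    return reward + gamma * (min_q + entropy_bonus)

# Toy example: an optimistic critic (q1 = 10) is clipped by q2 = 2,
# so the target is built from the smaller estimate.
y = soft_q_target(reward=np.array([1.0]),
                  q1_next=np.array([10.0]),
                  q2_next=np.array([2.0]),
                  log_pi_next=np.array([-1.0]))
```

In the toy example the target is 1.0 + 0.99 × (2.0 + 0.2) = 3.178: the overestimating critic never enters the target, which is the overestimation-control behavior the paper seeks to strengthen.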
Key words
reinforcement learning / maximum entropy reinforcement learning / Q-value overestimation / soft actor critic (SAC) algorithm
Funding
Innovation Capability Building Project of the Jilin Provincial Development and Reform Commission (2020C017-2)
Key Project of the Jilin Province Science and Technology Development Plan (20210201082GX)
Publication Year
2024