Neural Networks, 2022, Vol. 152. DOI: 10.1016/j.neunet.2022.04.021

Optimistic reinforcement learning by forward Kullback-Leibler divergence optimization

Kobayashi, Taisuke¹

Author Information

  • 1. Nara Institute of Science & Technology

Abstract

This paper presents a new interpretation of the traditional optimization method in reinforcement learning (RL) as an optimization problem using reverse Kullback-Leibler (KL) divergence, and derives a new optimization method that uses forward KL divergence in its place. Although RL originally aims to maximize return indirectly through optimization of the policy, recent work by Levine proposed a different derivation process that explicitly treats optimality as a stochastic variable. This paper follows that concept and formulates the traditional learning rules for both the value function and the policy as optimization problems with reverse KL divergence including optimality. Focusing on the asymmetry of KL divergence, new optimization problems with forward KL divergence are then derived. Remarkably, these new optimization problems can be regarded as optimistic RL, where the degree of optimism is intuitively specified by a hyperparameter converted from an uncertainty parameter. In addition, the optimism can be enhanced when the method is integrated with prioritized experience replay and eligibility traces, both of which accelerate learning. The effects of this expected optimism were investigated through learning tendencies in numerical simulations using PyBullet. As a result, moderate optimism accelerated learning and yielded higher rewards. In a realistic robotic simulation, the proposed method with moderate optimism outperformed one of the state-of-the-art RL methods. © 2022 Elsevier Ltd. All rights reserved.
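As a reading aid, the asymmetry the abstract refers to can be sketched in the notation of Levine's control-as-inference framework. This is an illustrative summary only, not the paper's exact derivation; the optimality variable O, the policy \pi(a|s), and the optimal-action posterior p(a|s, O=1) are assumed from that framework rather than taken from this record:

% Reverse KL (traditional RL objective): mode-seeking; the policy is
% penalized for placing mass where the optimal posterior has little.
\min_{\pi} D_{\mathrm{KL}}\left[ \pi(a \mid s) \,\middle\|\, p(a \mid s, O=1) \right]
  = \min_{\pi} \mathbb{E}_{a \sim \pi}\left[ \ln \frac{\pi(a \mid s)}{p(a \mid s, O=1)} \right]

% Forward KL (the proposed direction): mean-seeking; the policy must
% keep mass on every action the optimal posterior deems plausible.
\min_{\pi} D_{\mathrm{KL}}\left[ p(a \mid s, O=1) \,\middle\|\, \pi(a \mid s) \right]
  = \min_{\pi} \mathbb{E}_{a \sim p(\cdot \mid s, O=1)}\left[ \ln \frac{p(a \mid s, O=1)}{\pi(a \mid s)} \right]

The mean-seeking forward direction is reluctant to rule out actions whose optimality is still uncertain, which is one way to read the "optimism" the abstract describes and its tuning via an uncertainty-derived hyperparameter.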

Keywords

Reinforcement learning; Control as probabilistic inference; Kullback-Leibler divergence; Optimistic learning


Publication Year

2022
Journal: Neural Networks

Indexed in: EI, SCI
ISSN: 0893-6080
Cited by: 1
References: 52