Neural Networks, 2022, Vol. 152. DOI: 10.1016/j.neunet.2022.04.009

Discovering diverse solutions in deep reinforcement learning by maximizing state-action-based mutual information

Osa, Takayuki (1); Tangkaratt, Voot (2); Sugiyama, Masashi (2)

Author information

  • 1. Kyushu Inst Technol
  • 2. RIKEN Ctr Adv Intelligence Project

Abstract

Reinforcement learning algorithms are typically limited to learning a single solution for a specified task, even though diverse solutions often exist. Recent studies showed that learning a set of diverse solutions is beneficial because diversity enables robust few-shot adaptation. Although existing methods learn diverse solutions by using the mutual information as unsupervised rewards, such an approach often suffers from the bias of the gradient estimator induced by value function approximation. In this study, we propose a novel method that can learn diverse solutions without suffering from the bias problem. In our method, a policy conditioned on a continuous or discrete latent variable is trained by directly maximizing the variational lower bound of the mutual information, instead of using the mutual information as unsupervised rewards as in previous studies. Through extensive experiments on robot locomotion tasks, we demonstrate that the proposed method successfully learns an infinite set of diverse solutions by learning continuous latent variables, which is more challenging than learning a finite number of solutions. Subsequently, we show that our method enables more effective few-shot adaptation compared with existing methods. (C) 2022 Elsevier Ltd. All rights reserved.
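
A minimal sketch of the objective the abstract describes, under assumed notation not given in this record: a latent-conditioned policy \pi_\theta(a \mid s, z), a fixed latent prior p(z), and a variational distribution q_\phi(z \mid s, a) (these symbols are illustrative, not taken from the paper). The state-action-based mutual information admits the standard Barber-Agakov variational lower bound:

\[
I(Z; S, A) \;\ge\; \mathbb{E}_{z \sim p(z),\; (s,a) \sim \pi_\theta(\cdot \mid \cdot,\, z)} \big[ \log q_\phi(z \mid s, a) \big] \;+\; \mathcal{H}(Z),
\]

where \mathcal{H}(Z) is the entropy of the latent prior and is constant when p(z) is fixed. The distinction drawn in the abstract is that \theta and \phi are updated by ascending this bound directly, rather than treating \log q_\phi(z \mid s, a) as an unsupervised reward whose gradient must pass through an approximated value function, which is the source of the estimator bias the method avoids.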

Key words

Reinforcement learning; Robot learning; Representation learning


Publication year: 2022

Journal: Neural Networks
Indexed in: EI, SCI
ISSN: 0893-6080
Cited by: 1
References: 42