Teacher-apprentices RL (TARL): leveraging complex policy distribution through generative adversarial hypernetwork in reinforcement learning

Abstract: Typically, a Reinforcement Learning (RL) algorithm focuses on learning a single deployable policy as the end product. Depending on the initialization method and seed randomization, learning a single policy can lead to convergence to different local optima across runs, especially when the algorithm is sensitive to hyper-parameter tuning. Motivated by the capability of Generative Adversarial Networks (GANs) to learn complex data manifolds, the adversarial training procedure can instead be used to learn a population of well-performing policies. We extend the teacher-student methodology from the Knowledge Distillation field, typically applied to deep neural network prediction tasks, to the RL paradigm. Instead of learning a single compressed student network, an adversarially trained generative model (hypernetwork) is learned to output the network weights of a population of well-performing policy networks, representing a school of apprentices. Our proposed framework, named Teacher-Apprentices RL (TARL), is modular and can be used in conjunction with many existing RL algorithms. We demonstrate the performance gain and improved robustness obtained by combining TARL with various types of RL algorithms, including the direct policy search Cross-Entropy Method, Q-learning, Actor-Critic, and policy-gradient-based methods.
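The core idea of the abstract, a hypernetwork that maps latent noise vectors to the weights of many distinct policy networks, can be illustrated with a minimal sketch. This is not the paper's implementation; all dimensions, the linear hypernetwork, and the linear policy are simplifying assumptions chosen only to show how one generator can emit a population of policies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (assumptions, not from the paper): the hypernetwork
# maps a latent noise vector z to the flattened parameters of a small policy.
OBS_DIM, ACT_DIM, LATENT_DIM = 4, 2, 8
POLICY_PARAMS = OBS_DIM * ACT_DIM + ACT_DIM  # weights + biases of a linear policy

# Hypernetwork: here just a single linear map from latent space to policy
# parameters (in TARL this generator would be trained adversarially).
W_hyper = rng.normal(scale=0.1, size=(LATENT_DIM, POLICY_PARAMS))

def generate_policy(z):
    """Map a latent code z to the parameters of one apprentice policy."""
    flat = z @ W_hyper
    W = flat[: OBS_DIM * ACT_DIM].reshape(OBS_DIM, ACT_DIM)
    b = flat[OBS_DIM * ACT_DIM:]
    return W, b

def policy_action(W, b, obs):
    """Greedy action from a generated linear policy."""
    return int(np.argmax(obs @ W + b))

# Sampling different latent codes yields a population of distinct policies,
# i.e. a "school of apprentices" from a single generative model.
z1, z2 = rng.normal(size=LATENT_DIM), rng.normal(size=LATENT_DIM)
p1, p2 = generate_policy(z1), generate_policy(z2)
obs = rng.normal(size=OBS_DIM)
a1, a2 = policy_action(*p1, obs), policy_action(*p2, obs)
```

The point of the sketch is the indirection: the object being trained is the generator's parameters, while each sampled latent code induces a full, deployable policy.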

Keywords: Reinforcement learning, Hypernetwork, Generative model, Teacher-apprentices

Shi Yuan Tang, Athirai A. Irissappane, Frans A. Oliehoek, Jie Zhang


Nanyang Technological University; Alibaba-NTU Singapore Joint Research Institute

University of Washington

Delft University of Technology

Nanyang Technological University


2023

Autonomous agents and multi-agent systems


Indexed in: EI, SCI
ISSN: 1387-2532
Year, Volume (Issue): 2023, 37(2)