SCIENCE CHINA Information Sciences, 2024, Vol. 67, Issue 7: 236-251. DOI: 10.1007/s11432-023-4028-1

Ensemble successor representations for task generalization in offline-to-online reinforcement learning

Changhong WANG 1, Xudong YU 1, Chenjia BAI 2, Qiaosheng ZHANG 3, Zhen WANG 4

Author information

  • 1. Space Control and Inertial Technology Research Center, Harbin Institute of Technology, Harbin 150001, China
  • 2. Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China; Shenzhen Research Institute of Northwestern Polytechnical University, Shenzhen 518057, China
  • 3. Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China
  • 4. School of Cybersecurity, Northwestern Polytechnical University, Xi'an 710072, China

Abstract

In reinforcement learning (RL), training a policy from scratch with online experiences can be inefficient because of the difficulty of exploration. Recently, offline RL has provided a promising solution by supplying an initialized offline policy, which can then be refined through online interactions. However, existing approaches primarily perform offline and online learning on the same task, without considering the task generalization problem in offline-to-online adaptation. In real-world applications, it is common to have an offline dataset from only a specific task while aiming for fast online adaptation to several tasks. To address this problem, our work builds upon the investigation of successor representations for task generalization in online RL and extends the framework to incorporate offline-to-online learning. We demonstrate that the conventional paradigm using successor features cannot effectively utilize offline data and improve performance on the new task through online fine-tuning. To mitigate this, we introduce a novel methodology that leverages offline data to acquire an ensemble of successor representations and subsequently constructs ensemble Q functions. This approach enables robust representation learning from datasets with different coverage and facilitates fast adaptation of the Q functions towards new tasks during the online fine-tuning phase. Extensive empirical evaluations provide compelling evidence of the superior performance of our method in generalizing to diverse or even unseen tasks.
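As background for the factorization the abstract refers to, the following is a minimal sketch of the standard successor-feature formulation, assuming rewards linear in state-action features φ; the conservative ensemble aggregation shown (lower-confidence bound with coefficient β) is one common choice and not necessarily the paper's exact construction:

```latex
% Successor features: expected discounted sum of state-action features under policy \pi
\psi^{\pi}(s,a) \;=\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\,\phi(s_t,a_t) \,\middle|\, s_0=s,\, a_0=a\right]

% If rewards are linear in the features, r_w(s,a) = \phi(s,a)^{\top} w,
% the Q function factorizes into a task-independent part \psi and a task weight w:
Q^{\pi}_{w}(s,a) \;=\; \psi^{\pi}(s,a)^{\top} w

% With an ensemble \{\psi_i\}_{i=1}^{N} learned from offline data, per-member
% Q functions Q_i(s,a) = \psi_i(s,a)^{\top} w can be aggregated conservatively, e.g.
Q(s,a) \;=\; \frac{1}{N}\sum_{i=1}^{N} Q_i(s,a) \;-\; \beta \, \operatorname{Std}_{i}\!\left[\, Q_i(s,a) \,\right]
```

Under this factorization, adapting to a new task only requires re-estimating the low-dimensional weight vector w from online rewards, which is what makes fast offline-to-online fine-tuning plausible.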

Key words

offline reinforcement learning / online fine-tuning / task generalization / successor representations / ensembles


Funding

National Science Fund for Distinguished Young Scholars (62025602)

National Natural Science Foundation of China (62306242)

National Natural Science Foundation of China (U22B2036)

National Natural Science Foundation of China (11931015)

Fok Ying-Tong Education Foundation, China (171105)

Tencent Foundation, XPLORER PRIZE

Science Center Program of National Natural Science Foundation of China (62188101)

Heilongjiang Touyan Innovation Team Program

Publication year

2024

SCIENCE CHINA Information Sciences
Chinese Academy of Sciences

Indexed in: CSTPCD, EI
Impact factor: 0.715
ISSN: 1674-733X