

Multi-agent Reinforcement Learning Method Based on Observation Reconstruction
Common knowledge is the set of knowledge commonly known within a multi-agent system. How to make full use of common knowledge for policy learning is a challenging problem in multi-agent independent learning systems. To address this problem, this paper proposes a multi-agent reinforcement learning method based on observation reconstruction, called IPPO-CKOR, focusing on common knowledge extraction and independent learning network design. First, common knowledge features are computed from and fused with each agent's observation information, yielding observation information fused with common knowledge features. Second, an agent selection algorithm based on common knowledge selects closely related agents, and a reconstructed-feature generation mechanism constructs their feature information; together with the fused observation information, this forms the reconstructed observation information used for learning and executing agent policies. Third, an independent learning network based on observation reconstruction is designed, which employs a multi-head self-attention mechanism to process the reconstructed observation information and uses one-dimensional convolution and GRU layers to handle observation information sequences. This enables agents to extract more effective features from observation sequences, effectively mitigating the impact of environment non-stationarity and partial observability. Experimental results demonstrate that the proposed method significantly outperforms existing typical multi-agent reinforcement learning methods that employ independent learning.
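The network described above can be sketched as follows. This is a minimal illustration, not the paper's exact architecture: it assumes each agent holds a sequence of reconstructed observations shaped (batch, time, tokens, features), where the per-step tokens are the agent's fused observation plus the generated features of the selected closely related agents. Multi-head self-attention mixes the tokens at each step, then a one-dimensional convolution and a GRU extract temporal features; all layer sizes and the class name `ObsReconstructionNet` are illustrative.

```python
import torch
import torch.nn as nn

class ObsReconstructionNet(nn.Module):
    """Illustrative sketch of an observation-reconstruction learning network."""

    def __init__(self, d_obs=32, n_heads=4, d_hidden=64, n_actions=5):
        super().__init__()
        # Multi-head self-attention over the reconstructed observation tokens
        self.attn = nn.MultiheadAttention(d_obs, n_heads, batch_first=True)
        # 1-D convolution over the time axis of the per-step summaries
        self.conv = nn.Conv1d(d_obs, d_hidden, kernel_size=3, padding=1)
        # GRU aggregates the convolved sequence into a temporal feature
        self.gru = nn.GRU(d_hidden, d_hidden, batch_first=True)
        self.policy = nn.Linear(d_hidden, n_actions)

    def forward(self, obs):                       # obs: (B, T, K, d_obs)
        B, T, K, d = obs.shape
        tokens = obs.reshape(B * T, K, d)
        mixed, _ = self.attn(tokens, tokens, tokens)    # mix tokens per step
        step = mixed.mean(dim=1).reshape(B, T, d)       # per-step summary
        feat = self.conv(step.transpose(1, 2)).transpose(1, 2)  # (B, T, d_hidden)
        out, _ = self.gru(feat)
        return self.policy(out[:, -1])            # logits from last hidden state

# 2 episodes, 6 timesteps, 3 tokens (self + 2 selected agents), 32 features
logits = ObsReconstructionNet()(torch.randn(2, 6, 3, 32))
print(logits.shape)  # torch.Size([2, 5])
```

In an IPPO-style setup each agent would run its own copy of such a network; the GRU over the convolved sequence is what lets the policy condition on observation history rather than a single partially observed frame.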

Observation reconstruction; Multi-agent cooperative strategy; Multi-agent reinforcement learning; Independent learning

史殿习、胡浩萌、宋林娜、杨焕焕、欧阳倩滢、谭杰夫、陈莹


Intelligent Game and Decision Laboratory, Beijing 100091, China

Tianjin (Binhai) Artificial Intelligence Innovation Center, Tianjin 300457, China

College of Computer, National University of Defense Technology, Changsha 410073, China

National Innovation Institute of Defense Technology, Beijing 100071, China



Science and Technology Innovation 2030 Major Project of the Ministry of Science and Technology (2020AAA0104802); National Natural Science Foundation of China (91948303)


Computer Science (计算机科学)
Chongqing Southwest Information Co., Ltd. (formerly the Southwest Information Center of the Ministry of Science and Technology)

Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.944
ISSN: 1002-137X
Year, Volume (Issue): 2024, 51(4)