Local Observation Reconstruction for Ad-Hoc Cooperation
In recent years, multi-agent reinforcement learning has attracted considerable attention from researchers. Within this field, ad-hoc cooperation, i.e., adapting to teammates that vary in type and number, is a key open problem. Existing methods either rely on strong prior-knowledge assumptions or use hard-coded cooperation protocols, so they lack generality and cannot be extended to more general ad-hoc cooperation scenarios. To address this problem, this paper proposes a local observation reconstruction algorithm for ad-hoc cooperation, which uses attention mechanisms and sampling networks to reconstruct local observations, enabling the algorithm to recognize and fully exploit high-dimensional state representations in different situations and to achieve zero-shot generalization in ad-hoc cooperation scenarios. The performance of the algorithm is compared with representative algorithms on the StarCraft micromanagement environment and in ad-hoc cooperation scenarios to verify its effectiveness.
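The core idea described above, reconstructing local observations so that a policy can cope with a varying number of teammates, can be illustrated with a minimal attention-pooling sketch. This is a hypothetical illustration only: the paper's actual network architecture, including its sampling component, is not specified in the abstract, and all names and weight shapes below are assumptions. The key property shown is that the reconstructed observation has a fixed size regardless of team size, which is what permits zero-shot transfer across team compositions.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def reconstruct_observation(self_obs, teammate_obs, w_q, w_k, w_v):
    """Attention-pool a variable number of teammate observations into a
    fixed-size vector, then concatenate it with the agent's own observation.
    Hypothetical sketch; weights w_q, w_k, w_v are assumed learned parameters."""
    q = self_obs @ w_q                        # query derived from the agent itself
    keys = teammate_obs @ w_k                 # one key per teammate
    vals = teammate_obs @ w_v                 # one value per teammate
    attn = softmax(keys @ q / np.sqrt(q.size))  # weight each teammate
    pooled = attn @ vals                      # fixed size regardless of team size
    return np.concatenate([self_obs, pooled])

rng = np.random.default_rng(0)
d, h = 4, 4  # per-agent observation dim, attention head dim (assumed)
w_q, w_k, w_v = (rng.standard_normal((d, h)) for _ in range(3))
self_obs = rng.standard_normal(d)

# The output has the same shape for 2 teammates and 5 teammates,
# so the same downstream policy network can consume both.
out2 = reconstruct_observation(self_obs, rng.standard_normal((2, d)), w_q, w_k, w_v)
out5 = reconstruct_observation(self_obs, rng.standard_normal((5, d)), w_q, w_k, w_v)
```

Because the attention weights sum to one over however many teammates are present, the pooled vector stays in a comparable range as the team grows or shrinks, which is one common design choice for handling variable team sizes.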