Abstract
Many multi-agent scenarios require message sharing among agents to promote coordination, which challenges the robustness of multi-agent communication when policies are deployed in environments with message perturbations. Most relevant studies tackle this issue under specific assumptions, e.g., that only a limited number of message channels suffer perturbations, which limits their effectiveness in complex scenarios. In this paper, we take a further step toward addressing this issue by learning robust cooperative multi-agent reinforcement learning via multi-view message certification, dubbed CroMAC. Agents trained under CroMAC obtain guaranteed lower bounds on state-action values, enabling them to identify and choose the optimal action under worst-case deviations when the received messages are perturbed. Concretely, we first model multi-agent communication as a multi-view problem, where each message represents a view of the state. We then extract a certified joint message representation via a multi-view variational autoencoder (MVAE) that uses a product-of-experts inference network. During optimization, we apply perturbations in the latent space of the state to obtain a certification guarantee, and the learned joint message representation is trained to approximate the certified state representation. Extensive experiments on several cooperative multi-agent benchmarks validate the effectiveness of the proposed CroMAC.
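As background for the product-of-experts inference network mentioned above, the standard way to fuse several Gaussian per-view posteriors into one joint Gaussian is a precision-weighted combination. The sketch below is illustrative only (the function name and shapes are assumptions, not the paper's implementation); it shows the closed-form Gaussian product used in multi-view VAEs:

```python
import numpy as np

def product_of_experts(mus, logvars):
    """Fuse per-view Gaussian posteriors q_i(z | m_i) into a joint
    Gaussian via a product of experts: the joint precision is the sum
    of per-view precisions, and the joint mean is the precision-weighted
    average of per-view means.

    mus, logvars: arrays of shape (num_views, latent_dim).
    Returns (joint_mu, joint_logvar), each of shape (latent_dim,).
    """
    precisions = np.exp(-np.asarray(logvars))            # 1 / sigma_i^2
    joint_var = 1.0 / precisions.sum(axis=0)             # combined variance
    joint_mu = joint_var * (np.asarray(mus) * precisions).sum(axis=0)
    return joint_mu, np.log(joint_var)

# Toy example: two agents' message encodings as 3-dim Gaussian views.
mus = np.array([[0.0, 1.0, 2.0],
                [2.0, 1.0, 0.0]])
logvars = np.zeros((2, 3))  # equal unit variances -> simple average of means
joint_mu, joint_logvar = product_of_experts(mus, logvars)
print(joint_mu)              # -> [1. 1. 1.]
print(np.exp(joint_logvar))  # -> [0.5 0.5 0.5]
```

Note that fusing more (consistent) views shrinks the joint variance, which is what makes the joint representation a sharper estimate of the underlying state than any single message view.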
Funding
National Key R&D Program of China (2020AAA0107200)
National Natural Science Foundation of China (61921006)
National Natural Science Foundation of China (61876119)
National Natural Science Foundation of China (62276126)
Natural Science Foundation of Jiangsu Province (BK20221442)
Program B for Outstanding PhD Candidate of Nanjing University