Coordinated Variable Speed Limit Control for Freeways Based on Multi-Agent Deep Reinforcement Learning
To meet the need for coordinated variable speed limit (VSL) control across multiple freeway segments, and to solve the problem of efficient training and optimization in a high-dimensional parameter space, a multi-agent deep deterministic policy gradient (MADDPG) algorithm is proposed for freeway VSL control. Unlike existing research based on the single-agent deep deterministic policy gradient (DDPG) algorithm, MADDPG abstracts each control unit as an agent with an Actor-Critic reinforcement learning architecture and shares the state and action information of all agents during training, so that each agent can infer the control strategies of the other agents, thereby realizing multi-segment coordinated control. Based on the open-source simulation software SUMO, the effectiveness of the proposed control method is verified in a typical freeway traffic jam scenario. The experimental results show that the proposed MADDPG algorithm reduces the traffic jam duration and the speed standard deviation by 69.23% and 47.96%, respectively, significantly improving traffic efficiency and safety. Compared with the single-agent DDPG algorithm, MADDPG saves 50% of the training time and increases the cumulative return by 7.44%, showing that the multi-agent algorithm improves the optimization efficiency of the collaborative control strategy. Further, to verify the necessity of sharing information among agents, MADDPG is compared with the independent DDPG (IDDPG) algorithm: MADDPG reduces the traffic jam duration and the speed standard deviation by 11.65% and 19.00%, respectively, relative to IDDPG.
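To illustrate the centralized-training, decentralized-execution structure the abstract describes, the following is a minimal sketch (not the authors' implementation): each VSL control unit has its own actor that maps a local segment observation to a speed-limit action, while each critic is conditioned on the observations and actions of all agents during training. The two-segment setup, network sizes, and observation features are illustrative assumptions.

```python
# Minimal MADDPG-style sketch for coordinated VSL control (illustrative only).
import torch
import torch.nn as nn

N_AGENTS = 2   # assumed number of controlled freeway segments
OBS_DIM  = 4   # assumed per-segment state, e.g. density, mean speed, flow, queue
ACT_DIM  = 1   # one continuous speed-limit action per segment

class Actor(nn.Module):
    """Decentralized policy: local observation -> speed-limit action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACT_DIM), nn.Tanh())  # action scaled to [-1, 1]

    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralized Q-function: joint observations + joint actions -> value."""
    def __init__(self):
        super().__init__()
        joint_dim = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.net = nn.Sequential(
            nn.Linear(joint_dim, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, all_obs, all_acts):
        return self.net(torch.cat([all_obs, all_acts], dim=-1))

actors  = [Actor() for _ in range(N_AGENTS)]
critics = [CentralCritic() for _ in range(N_AGENTS)]

# One forward pass with dummy data: each actor uses only its own observation,
# but every critic sees the shared state/action information of all agents,
# which is what allows an agent to account for the other segments' strategies.
obs  = [torch.randn(1, OBS_DIM) for _ in range(N_AGENTS)]
acts = [actor(o) for actor, o in zip(actors, obs)]
joint_obs, joint_acts = torch.cat(obs, dim=-1), torch.cat(acts, dim=-1)
q_values = [critic(joint_obs, joint_acts) for critic in critics]
```

At execution time only the actors are needed, so each segment can still choose its speed limit from local measurements; the shared information is used only while training the critics.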