A Reinforcement Learning Power Grid Dispatching Method Considering Agent Pre-states and an Adaptive Mechanism for Environmental Features
The integration of a high proportion of renewable energy makes power flow difficult to predict and control, posing new challenges to the safe and stable operation of the power grid. Compared with traditional control modes, intelligent scheduling methods represented by reinforcement learning can cope with sequential decision-making problems in partially observable grid environments, but they tend to adapt poorly when the proportion of renewable energy in the grid changes. To address this issue, the Actor-Critic architecture is taken as the basic framework, the pre-state is used to represent the state of the agent, an adaptive mechanism for environmental features is introduced, and the resulting method is applied to power grid scheduling tasks in scenarios where the proportion of renewable energy changes. Because exogenous random events such as source-load fluctuations influence the state of the grid after a dispatch action, a state-space explosion can easily arise. Representing the agent's state by the pre-state, i.e., the grid state before the power flow calculation, effectively reduces the state space. Introducing an adaptive mechanism based on universal environmental features avoids the "decision forgetting" problem, thereby improving the agent's adaptability to changes in the proportion of renewable energy. Simulation results show that the method achieves good convergence speed and control stability on 118-node power grid scheduling tasks with dynamically changing renewable energy proportions.
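The two key ideas above, an Actor-Critic learner whose state is the pre-state (observed before exogenous source-load fluctuations are realized), can be illustrated with a toy sketch. All names, the discretized environment, and the reward shape are illustrative assumptions, not the paper's actual formulation; the point is only that the fluctuation perturbs the next pre-state rather than enlarging the agent's state space.

```python
import math
import random

random.seed(0)

# Toy setup (assumed for illustration): the pre-state is a discretized
# net-load imbalance observed BEFORE the power flow calculation; the
# action is a dispatch adjustment; exogenous source-load fluctuation is
# applied AFTER the action, so it perturbs the next pre-state but never
# adds dimensions to the agent's state space.
N_STATES = 5          # discretized pre-state bins
ACTIONS = [-1, 0, 1]  # dispatch down / hold / up

# Actor: per-state action preferences; Critic: per-state value estimates
prefs = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
values = [0.0] * N_STATES

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

def step(state, action):
    """Toy grid dynamics: reward is highest when the dispatch action
    cancels the imbalance; random fluctuation hits the next pre-state."""
    imbalance = state - N_STATES // 2            # deviation from balance
    reward = -abs(imbalance + ACTIONS[action])   # good dispatch reduces it
    noise = random.choice([-1, 0, 1])            # source-load fluctuation
    nxt = min(N_STATES - 1, max(0, state + ACTIONS[action] + noise))
    return reward, nxt

def train(episodes=3000, alpha=0.1, beta=0.05, gamma=0.9):
    for _ in range(episodes):
        s = random.randrange(N_STATES)
        for _ in range(10):
            probs = softmax(prefs[s])
            a = random.choices(range(len(ACTIONS)), probs)[0]
            r, s2 = step(s, a)
            td = r + gamma * values[s2] - values[s]  # TD error
            values[s] += alpha * td                  # critic update
            for i in range(len(ACTIONS)):            # actor (policy gradient)
                grad = (1.0 if i == a else 0.0) - probs[i]
                prefs[s][i] += beta * td * grad
            s = s2

train()
# Greedy policy per pre-state; it should dispatch against the imbalance.
policy = [max(range(len(ACTIONS)), key=lambda i, s=s: prefs[s][i])
          for s in range(N_STATES)]
```

The paper's actual environment (a 118-node grid with power flow calculations) is far richer; this sketch only shows the update structure and where the pre-state sits relative to the exogenous noise.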