Reinforcement Learning Approach with Environment-Adaptive Gaussian Noise Augmentation
The state-vector-input-based reinforcement learning approach is currently a fundamental research direction in the field of reinforcement learning, with broad application prospects. However, the low data efficiency of current reinforcement learning methods leads to prolonged training times, making them difficult to apply in real-world environments. To address this issue, an environment-adaptive Gaussian noise augmentation (EAGNA) method is proposed, which is integrated as a module into the soft actor-critic (SAC) and proximal policy optimization (PPO) algorithms. The method considers the distribution range of each element in the task environment's state vector and adds Gaussian noise with a different mean and standard deviation to each element for data augmentation. Across three state-vector-based control tasks from the OpenAI Gym benchmark, EAGNA achieved higher average returns than the original algorithms, improving data efficiency. Notably, on the Lunar Lander control task, which has complex state inputs, EAGNA outperformed the baseline SAC and PPO methods by 30.52 and 26.09 in average return, respectively.
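As a rough illustration of the idea described above, the sketch below shows one way per-element, range-adaptive Gaussian noise could be applied to state vectors. The class name `EAGNAAugmenter`, the `noise_scale` parameter, and the choice of zero-mean noise with standard deviation proportional to each element's observed range are assumptions made for illustration; the abstract states only that the noise mean and standard deviation differ per element according to its distribution range, not the exact adaptation rule.

```python
import numpy as np

class EAGNAAugmenter:
    """Minimal sketch of environment-adaptive per-element Gaussian noise
    augmentation. The adaptation rule used here (zero mean, std proportional
    to each element's observed range) is an assumption for illustration;
    the paper specifies only that each element gets its own mean and std."""

    def __init__(self, state_dim, noise_scale=0.05):
        self.noise_scale = noise_scale           # hypothetical global scale
        self.low = np.full(state_dim, np.inf)    # running per-element minimum
        self.high = np.full(state_dim, -np.inf)  # running per-element maximum

    def update(self, state):
        # Track the observed distribution range of each state element.
        self.low = np.minimum(self.low, state)
        self.high = np.maximum(self.high, state)

    def augment(self, state):
        # Per-element std proportional to the observed range, so elements
        # with wider distributions receive proportionally larger noise.
        span = self.high - self.low
        span = np.where(np.isfinite(span), span, 0.0)
        std = self.noise_scale * span
        return state + np.random.normal(loc=0.0, scale=std)

# Example usage (standalone; SAC/PPO integration details are not given
# in the abstract, so this only shows the augmentation step itself):
aug = EAGNAAugmenter(state_dim=8)  # e.g. an 8-dimensional Lunar Lander state
state = np.zeros(8)
aug.update(state)                  # refresh per-element range estimates
noisy_state = aug.augment(state)   # augmented copy for training
```

In a training loop, such an augmenter would presumably be applied to states sampled from the replay buffer (SAC) or collected rollouts (PPO) before policy and value updates, producing additional synthetic transitions without extra environment interaction; this integration point is an assumption consistent with the abstract's description of EAGNA as a plug-in module.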