Data-driven Policy Optimization for Stochastic Systems Involving Adaptive Critic
Adaptive critic technology has been widely employed to solve the optimal control problems of complicated nonlinear systems, but there are still limitations in solving the infinite-horizon optimal control problems of discrete-time nonlinear stochastic systems. In this paper, we establish a data-driven discounted optimal regulation method for discrete-time stochastic systems involving adaptive critic technology. First, we investigate the infinite-horizon optimal control problem with a discount factor for nonlinear stochastic systems under a relaxed assumption. The developed stochastic Q-learning algorithm can optimize an initial admissible policy to the optimal one in a monotonically nonincreasing way. Based on the data-driven idea, the policy optimization of the stochastic Q-learning algorithm is executed directly from data, without building a dynamic model. Then, the stochastic Q-learning algorithm is implemented by utilizing the actor-critic neural network scheme. Finally, two nonlinear benchmarks are given to demonstrate the overall performance of the developed stochastic Q-learning algorithm.
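As general background to the abstract above, the core idea of data-driven, discounted Q-learning — updating a Q-function from sampled transitions of a stochastic system, with no model of the dynamics — can be sketched as follows. This is a minimal generic illustration, not the authors' stochastic Q-learning algorithm: the toy dynamics, cost, and all parameter values here are invented for the example.

```python
import random

random.seed(0)

# Illustrative sketch only: generic discounted Q-learning on a toy
# stochastic regulation task. The Q-function is updated purely from
# observed (state, action, cost, next-state) data, i.e. model-free.

N_STATES, N_ACTIONS = 4, 2
GAMMA = 0.95          # discount factor
ALPHA = 0.1           # learning rate
EPS = 0.1             # exploration rate

def step(s, a):
    """Toy stochastic dynamics (invented for this sketch): action 0
    drifts toward state 0, action 1 drifts away, with a 20% random
    slip. Cost is 0 at the origin and 1 elsewhere."""
    drift = -1 if a == 0 else 1
    if random.random() < 0.2:     # stochastic disturbance
        drift = -drift
    s_next = min(max(s + drift, 0), N_STATES - 1)
    cost = 0.0 if s_next == 0 else 1.0
    return s_next, cost

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

s = N_STATES - 1
for _ in range(20000):
    # epsilon-greedy action selection (cost minimization, so argmin)
    a = random.randrange(N_ACTIONS) if random.random() < EPS \
        else min(range(N_ACTIONS), key=lambda u: Q[s][u])
    s_next, cost = step(s, a)
    # model-free temporal-difference update of the Q-function
    target = cost + GAMMA * min(Q[s_next])
    Q[s][a] += ALPHA * (target - Q[s][a])
    s = s_next

# Greedy policy extracted from the learned Q-function
policy = [min(range(N_ACTIONS), key=lambda u: Q[s][u]) for s in range(N_STATES)]
print(policy)
```

After enough samples, the greedy policy drives every state toward the low-cost origin, which is the regulation behavior the abstract describes, here obtained without ever using the transition model inside the update.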

Adaptive critic design; data-driven; discrete-time systems; neural networks; Q-learning; stochastic optimal control

Ding Wang (王鼎), Jiangyu Wang (王将宇), Junfei Qiao (乔俊飞)


Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China

Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing 100124, China

Beijing Institute of Artificial Intelligence, Beijing 100124, China

Beijing Laboratory of Smart Environmental Protection, Beijing 100124, China



Funding: National Natural Science Foundation of China (62222301, 61890930-5, 62021003); Science and Technology Innovation 2030 New Generation Artificial Intelligence Major Project (2021ZD0112302, 2021ZD0112301)

2024

Acta Automatica Sinica (自动化学报)
Chinese Association of Automation; Institute of Automation, Chinese Academy of Sciences

Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 1.762
ISSN:0254-4156
Year, Volume (Issue): 2024, 50(5)