Reinforcement Learning-Based Evolving Flight Controller for Fixed-Wing Uncrewed Aircraft
Full-text sources: NETL | NSTL | IEEE
A significant challenge in designing flight controllers lies in their dependence on the quality of dynamic models. This research explores the potential of artificial-intelligence-based flight controllers to generalize control actions through learned policies rather than relying solely on the accuracy of dynamic models. An engineering-level, low-fidelity, linearized model of a fixed-wing uncrewed aircraft is used to train a multi-input multi-output (MIMO) flight controller, employing the deep deterministic policy gradient (DDPG) algorithm, to maintain cruise velocity and altitude. Whereas the existing literature often concentrates on simulation-based assessments of reinforcement learning (RL)-based flight controllers, this research conducts an extensive flight-test campaign comprising 15 flight tests to evaluate the reliability, robustness, and generalization capability of RL algorithms on tasks they were not specifically trained for, such as changing cruise altitude and velocity. The RL controller outperformed a well-tuned linear quadratic regulator (LQR) on several control tasks. Furthermore, a modification of the DDPG algorithm is presented that enhances the ability of RL controllers to evolve through experience gained from actual flights. The evolved controllers exhibit behavior distinct from that of the original controller. Comparative flight tests underscored the crucial role of the ratio of actual flight data to the number of simulation-based training instances in optimizing the evolved controllers.
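The abstract does not detail the DDPG update itself, so the following is only a minimal sketch of one DDPG training step, not the paper's implementation. It assumes hypothetical linear actor/critic networks, a 2-dimensional state (velocity and altitude errors) and a 2-dimensional action (throttle and elevator), and illustrates the two ingredients the algorithm is named for: a deterministic policy gradient for the actor and slowly tracking target networks for the critic's TD target.

```python
import numpy as np

rng = np.random.default_rng(0)

S_DIM, A_DIM = 2, 2              # hypothetical: [velocity err, altitude err] -> [throttle, elevator]
GAMMA, TAU = 0.99, 0.01          # discount factor, soft-update rate
LR_A, LR_C = 1e-3, 1e-2          # actor / critic learning rates

# Toy linear actor mu(s) = Wa @ s and linear critic Q(s, a) = wc @ [s; a]
Wa = rng.normal(scale=0.1, size=(A_DIM, S_DIM))
wc = rng.normal(scale=0.1, size=(S_DIM + A_DIM,))
Wa_t, wc_t = Wa.copy(), wc.copy()   # target networks start as copies

def mu(W, s):
    return W @ s

def q(w, s, a):
    return w @ np.concatenate([s, a])

def ddpg_step(s, a, r, s2):
    """One DDPG update from a single transition (s, a, r, s2)."""
    global Wa, wc, Wa_t, wc_t
    # Critic: TD target built from the *target* actor and critic
    y = r + GAMMA * q(wc_t, s2, mu(Wa_t, s2))
    td = q(wc, s, a) - y
    wc -= LR_C * td * np.concatenate([s, a])   # gradient step on squared TD error
    # Actor: ascend dQ/da * dmu/dWa (deterministic policy gradient);
    # for a linear critic, dQ/da is just the action part of wc
    dq_da = wc[S_DIM:]
    Wa += LR_A * np.outer(dq_da, s)
    # Soft-update the targets toward the live networks
    Wa_t = (1 - TAU) * Wa_t + TAU * Wa
    wc_t = (1 - TAU) * wc_t + TAU * wc
    return td

# One illustrative transition with exploration noise on the action
s = np.array([1.0, -0.5])
a = mu(Wa, s) + 0.1 * rng.normal(size=A_DIM)
td = ddpg_step(s, a, r=-1.0, s2=np.array([0.8, -0.4]))
```

In practice the transitions would come from a replay buffer of simulated (and, per the paper's modification, real-flight) experience, with neural networks in place of the linear maps; the soft target update with small `TAU` is what keeps the TD target stable during training.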