Sample-Efficient Reinforcement Learning With Temporal Logic Objectives: Leveraging the Task Specification to Guide Exploration

In this article, we address the problem of learning optimal control policies for systems with uncertain dynamics and high-level control objectives specified as linear temporal logic (LTL) formulas. Uncertainty is considered in the workspace structure and in the outcomes of control decisions, giving rise to an unknown Markov decision process (MDP). Existing reinforcement learning (RL) algorithms for LTL tasks typically rely on exploring a product MDP state-space uniformly (e.g., using an $\epsilon$-greedy policy), compromising sample efficiency. This issue becomes more pronounced as the rewards get sparser and the MDP size or task complexity increases. In this article, we propose an accelerated RL algorithm that can learn control policies significantly faster than competitive approaches. Its sample efficiency relies on a novel task-driven exploration strategy that biases exploration toward directions that may contribute to task satisfaction. We provide theoretical analysis and extensive comparative experiments demonstrating the sample efficiency of the proposed method. The benefit of our method becomes more evident as the task complexity or the MDP size increases.
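The contrast the abstract draws between uniform $\epsilon$-greedy exploration and task-biased exploration can be illustrated with a minimal sketch. This is not the authors' algorithm; the `toward_goal` action set and the bias probability `delta` are hypothetical stand-ins for whatever mechanism identifies directions likely to advance satisfaction of the LTL task (e.g., progress in the automaton component of the product MDP).

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """Baseline: with probability epsilon explore uniformly over all actions."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def biased_exploration(q_values, toward_goal, epsilon=0.1, delta=0.8):
    """Sketch of task-driven exploration: when exploring, prefer (with
    probability delta) actions believed to make progress toward the LTL
    objective, given as the index list `toward_goal`; otherwise fall back
    to a uniform draw. With probability 1 - epsilon, act greedily."""
    if random.random() < epsilon:
        if toward_goal and random.random() < delta:
            return random.choice(toward_goal)
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

When rewards are sparse, the uniform rule wastes exploratory steps in directions irrelevant to the task; the biased rule concentrates them where task progress is plausible, which is the intuition behind the claimed sample-efficiency gain.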

Keywords: Modeling, Computational modeling, Robots, Probabilistic logic, Markov decision processes, Heuristic algorithms, Complexity theory, Uncertainty, Learning automata, Stochastic processes

Yiannis Kantaros, Jun Wang


Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO, USA

2025

IEEE Transactions on Automatic Control

ISSN:
Year, Volume (Issue): 2025, 70(5)