ISC-POMDPs: Partially Observed Markov Decision Processes With Initial-State Dependent Costs

We introduce a class of partially observed Markov decision processes (POMDPs) with costs that can depend on both the value and (future) uncertainty associated with the initial state. These Initial-State Cost POMDPs (ISC-POMDPs) enable the specification of objectives relative to a priori unknown initial states, which is useful in applications such as robot navigation, controlled sensing, and active perception that involve controlling systems to revisit, remain near, or actively infer their initial states. By developing a recursive Bayesian fixed-point smoother to estimate the initial state that resembles the standard recursive Bayesian filter, we show that ISC-POMDPs can be treated as POMDPs with (potentially) belief-dependent costs. We demonstrate the utility of ISC-POMDPs, including their ability to select controls that resolve (future) uncertainty about (past) initial states, in simulation.
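The abstract's key construction is a recursive Bayesian fixed-point smoother that estimates the initial state with a filter-like recursion. A common way to realise such a smoother for a finite-state model is to propagate the joint belief over the initial state and the current state, predicting through the transition model and updating with each new measurement exactly as a Bayesian filter would. The sketch below illustrates this idea; the matrices, state sizes, and numbers are hypothetical and not taken from the paper.

```python
import numpy as np

def fixed_point_smoother_step(joint_belief, T, O):
    """One recursion of a Bayesian fixed-point smoother for the initial
    state of a finite-state hidden Markov model.

    joint_belief[i, j] = P(x_0 = i, x_k = j | y_{1:k})
    T[j, l] = P(x_{k+1} = l | x_k = j)   (transition under the applied control)
    O[l]    = P(y_{k+1} | x_{k+1} = l)   (likelihood of the new measurement)
    """
    predicted = joint_belief @ T       # marginalise x_k, predict x_{k+1}
    updated = predicted * O[None, :]   # Bayes update with the new measurement
    return updated / updated.sum()     # normalise the joint posterior

# Toy 2-state example (illustrative numbers only).
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])                  # transition matrix
obs_lik = {0: np.array([0.8, 0.3]),         # P(y = 0 | x)
           1: np.array([0.2, 0.7])}         # P(y = 1 | x)

prior = np.array([0.5, 0.5])
# At time 0 the initial and current states coincide, so the joint
# belief starts as a diagonal matrix built from the prior.
joint = np.diag(prior)

for y in [1, 1, 0]:                         # a hypothetical measurement sequence
    joint = fixed_point_smoother_step(joint, T, obs_lik[y])

init_belief = joint.sum(axis=1)   # smoothed belief over the initial state x_0
curr_belief = joint.sum(axis=0)   # filtered belief over the current state x_k
```

Because the smoothed initial-state belief `init_belief` is a deterministic function of the propagated joint belief, any cost defined on the initial state (or its remaining uncertainty) becomes a belief-dependent cost, which is the reduction the abstract describes.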

Keywords: Costs, Bayes methods, Cost function, Uncertainty, Standards, Robot sensing systems, Markov decision processes, Current measurement, Vectors, Smoothing methods

Timothy L. Molloy


CIICADA Lab, School of Engineering, The Australian National University, Canberra, ACT, Australia

2025

IEEE Control Systems Letters


ISSN:
Year, volume (issue): 2025, 9(1)