Parameter continuity in time-varying Gauss-Markov models for learning from small training data sets

Linear time-invariant dynamic models are widely adopted in industry. In the machine learning domain, such models are known as time-invariant continuous-state hidden Gauss-Markov models. Their super-class, the linear time-varying dynamic models, has seen relatively few applications as predictive models and classifiers of time series. This is typically due to the model complexity and the need for a significantly larger training set than time-invariant models require. Without a large training set, the better modeling performance is counteracted by a less robust model. In this paper, we propose a continuity preference on the time-varying parameters of the model, which significantly reduces the required amount of training data while maintaining the modeling performance. We also derive a simple modification of the Expectation-Maximization algorithm that incorporates continuity in the parameters. The modified algorithm shows robust learning performance. The model performance is demonstrated by experiments on real 6-axis robotic manipulators in a laboratory and in the body shop of the car manufacturer Skoda Auto, as well as on a public benchmark data set. (C) 2022 Elsevier Inc. All rights reserved.
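The core idea of the abstract can be illustrated with a minimal sketch: when each time step of a time-varying model contributes only a few observations, per-step parameter estimates are noisy, but adding a quadratic penalty on the difference between consecutive parameters (a continuity preference) regularizes the fit. The sketch below is an assumption-laden toy, not the paper's EM algorithm: it fits scalar coefficients `a_t` of `x[t+1] ≈ a_t * x[t]` by solving the tridiagonal normal equations of a continuity-penalized least-squares objective.

```python
import numpy as np

def fit_tv_coeffs(x, lam):
    """Fit per-step coefficients a_t of x[t+1] ~ a_t * x[t] under the
    penalized objective
        sum_t (x[t+1] - a_t x[t])^2 + lam * sum_t (a_t - a_{t-1})^2.
    The normal equations form a tridiagonal system H a = b.
    Illustrative only; the paper's method modifies EM for hidden states."""
    T = len(x) - 1
    H = np.zeros((T, T))
    b = np.zeros(T)
    for t in range(T):            # data-fit terms
        H[t, t] += x[t] ** 2
        b[t] += x[t] * x[t + 1]
    for t in range(1, T):         # continuity-penalty terms
        H[t, t] += lam
        H[t - 1, t - 1] += lam
        H[t, t - 1] -= lam
        H[t - 1, t] -= lam
    return np.linalg.solve(H, b)

# Simulate a slowly varying scalar system (hypothetical parameters).
rng = np.random.default_rng(0)
T = 200
a_true = 0.9 + 0.05 * np.sin(np.linspace(0.0, 2.0 * np.pi, T))
x = np.empty(T + 1)
x[0] = 1.0
for t in range(T):
    x[t + 1] = a_true[t] * x[t] + 0.05 * rng.standard_normal()

a_rough = fit_tv_coeffs(x, lam=0.0)    # no continuity preference
a_smooth = fit_tv_coeffs(x, lam=10.0)  # with continuity preference

def total_variation(a):
    return float(np.abs(np.diff(a)).sum())
```

With `lam=0` each `a_t` is fit from a single transition and fluctuates wildly; the penalized estimate varies far less between consecutive steps, which is the effect the continuity preference is intended to produce with small training sets.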

Gauss-Markov process; EM optimization; LTV dynamic models; Kalman filter; Consistency; State estimation; Systems; Classification; Energy; Input

Ron, Martin; Burget, Pavel; Hlavac, Vaclav

Czech Tech Univ

2022

Information Sciences

Indexed in: EI; SCI
ISSN:0020-0255
Year, volume (issue): 2022, 595