
7 - Expectation propagation and generalised EP methods for inference in switching linear dynamical systems

from II - Deterministic approximations

Published online by Cambridge University Press:  07 September 2011

Onno Zoeter, Xerox Research Centre Europe, Meylan
Tom Heskes, Radboud University Nijmegen
David Barber, University College London
A. Taylan Cemgil, Boğaziçi Üniversitesi, Istanbul
Silvia Chiappa, University of Cambridge

Summary

Introduction

Many real-world problems can be described by models that extend the classical linear Gaussian dynamical system with (unobserved) discrete regime indicators. In such extended models the discrete indicators dictate which transition and observation model the process follows at a particular time. The problems of tracking and estimation in models with manoeuvring targets [1], multiple targets [25], non-Gaussian disturbances [15], unknown model parameters [9], failing sensors [20] and different trends [8] are all examples of problems that have been formulated in a conditionally Gaussian state space model framework. Since the extended model is so general, it has been invented and re-invented many times in multiple fields and is known by many different names, such as switching linear dynamical system, conditionally Gaussian state space model, switching Kalman filter model and hybrid model.
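To make the generative process concrete, the following is a minimal sketch of such a model: a discrete indicator follows a Markov chain and selects, at every time step, which linear Gaussian dynamics the continuous state obeys. The two-regime parameter values here are hypothetical and chosen only for illustration; they are not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-regime switching linear dynamical system (scalar state):
# the indicator s_t selects which (a, c, q, r) the process follows at time t.
a = [0.99, 0.50]   # state transition coefficient per regime
c = [1.00, 1.00]   # observation coefficient per regime
q = [0.10, 1.00]   # state noise variance per regime
r = [0.20, 0.20]   # observation noise variance per regime
P = np.array([[0.95, 0.05],   # regime transition probabilities
              [0.10, 0.90]])

def simulate(T):
    """Draw indicators s, states x and observations y from the model."""
    s = np.zeros(T, dtype=int)
    x = np.zeros(T)
    y = np.zeros(T)
    x[0] = rng.normal()
    y[0] = c[0] * x[0] + rng.normal(0.0, np.sqrt(r[0]))
    for t in range(1, T):
        s[t] = rng.choice(2, p=P[s[t - 1]])                      # switch regime
        x[t] = a[s[t]] * x[t - 1] + rng.normal(0.0, np.sqrt(q[s[t]]))
        y[t] = c[s[t]] * x[t] + rng.normal(0.0, np.sqrt(r[s[t]]))
    return s, x, y

s, x, y = simulate(200)
```

Conditioned on the full indicator sequence `s`, the model reduces to an ordinary linear Gaussian dynamical system; the inference difficulty discussed below comes entirely from `s` being unobserved.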

Although the extended model has a lot of expressive power, it is notorious for the fact that exact estimation of posteriors is intractable. In general, exact filtered, smoothed or predicted posteriors have a complexity exponential in the number of observations. Even when only marginals on the indicator variables are required, the problem remains NP-hard [19].
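The source of the exponential blow-up can be seen by counting: each possible regime history contributes one Gaussian component to the exact filtered posterior, so with K regimes the posterior over the continuous state at time t is a mixture of K to the power t Gaussians. A quick illustration:

```python
# Each regime history s_{1:t} contributes one Gaussian component to the
# exact filtered posterior p(x_t | y_{1:t}), so with K regimes the
# number of components grows as K**t: exponential in the number of
# observations t.
K = 2
components = [K ** t for t in (1, 5, 10, 30)]
print(components)  # [2, 32, 1024, 1073741824]
```

Approximate schemes such as assumed density filtering keep this tractable by collapsing the mixture back to a fixed, small number of components after every step.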

In this chapter we introduce a deterministic approximation scheme that is particularly suited to finding smoothed one- and two-time-slice posteriors. It can be seen as a symmetric backward pass and iteration scheme for previously proposed assumed density filtering approaches [9].
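The core operation in assumed density filtering, and in the expectation propagation scheme built on it, is the "collapse": projecting a Gaussian mixture onto a single Gaussian by moment matching, which minimises the KL divergence from the mixture to the Gaussian. The sketch below shows this projection for the univariate case; it is the generic moment-matching step, not the chapter's full algorithm.

```python
import numpy as np

def collapse(weights, means, variances):
    """Moment-match a univariate Gaussian mixture to a single Gaussian.

    The Gaussian minimising KL(mixture || Gaussian) matches the
    mixture's overall mean and variance (law of total variance).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalise the mixture weights
    m = np.asarray(means, dtype=float)
    v = np.asarray(variances, dtype=float)
    mean = np.sum(w * m)                  # E[x] = sum_k w_k m_k
    var = np.sum(w * (v + m**2)) - mean**2  # E[x^2] - E[x]^2
    return mean, var

# Two symmetric components: collapsed mean 0, variance 0.25 + 1 = 1.25.
mean, var = collapse([0.5, 0.5], [-1.0, 1.0], [0.25, 0.25])
print(mean, var)  # 0.0 1.25
```

Applying this collapse after every filtering step caps the number of mixture components at a constant; the chapter's contribution is a matching backward pass and an iteration scheme around this projection.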

The chapter is organised as follows. In Section 7.2 we present the general model; variants where only the transition or only the observation model switches, or where states or observations are multi- or univariate, can be treated as special cases.

Publisher: Cambridge University Press
Print publication year: 2011


References

[1] Y. Bar-Shalom and X.-R. Li. Estimation and Tracking: Principles, Techniques, and Software. Artech House, 1993.
[2] Y. Bar-Shalom and T. Fortmann. Tracking and Data Association. Academic Press, 1988.
[3] X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In Proceedings of the 14th Annual Conference on Uncertainty in Artificial Intelligence, pages 33–42. Morgan Kaufmann Publishers, 1998.
[4] C. Carter and R. Kohn. Markov chain Monte Carlo in conditionally Gaussian state space models. Biometrika, 83(3):589–601, 1996.
[5] R. Chen and J. S. Liu. Mixture Kalman filters. Journal of the Royal Statistical Society, Series B, 62:493–508, 2000.
[6] A. Doucet, N. de Freitas, K. Murphy and S. Russell. Rao-Blackwellized particle filtering for dynamic Bayesian networks. In Proceedings of the 17th Annual Conference on Uncertainty in Artificial Intelligence, pages 176–183. Morgan Kaufmann Publishers, 2001.
[7] Z. Ghahramani and G. E. Hinton. Variational learning for switching state-space models. Neural Computation, 12(4):963–996, 1998.
[8] J. Hamilton. A new approach to the economic analysis of nonstationary time series and the business cycle. Econometrica, 57(2):357–384, 1989.
[9] P. J. Harrison and C. F. Stevens. Bayesian forecasting. Journal of the Royal Statistical Society, Series B, 38:205–247, 1976.
[10] R. E. Helmick, W. D. Blair and S. A. Hoffman. Fixed-interval smoothing for Markovian switching systems. IEEE Transactions on Information Theory, 41:1845–1855, 1995.
[11] T. Heskes and O. Zoeter. Expectation propagation for approximate inference in dynamic Bayesian networks. In Proceedings of the 18th Annual Conference on Uncertainty in Artificial Intelligence, pages 216–223. Morgan Kaufmann Publishers, 2002.
[12] T. Heskes and O. Zoeter. Generalized belief propagation for approximate inference in hybrid Bayesian networks. In C. Bishop and B. Frey, editors, Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, 2003.
[13] H. J. Kappen and W. Wiegerinck. Novel iteration schemes for the cluster variation method. In Advances in Neural Information Processing Systems 14, pages 415–422. MIT Press, 2002.
[14] C.-J. Kim and C. R. Nelson. State-Space Models with Regime Switching. MIT Press, 1999.
[15] G. Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics, 5(1):1–25, 1996.
[16] F. R. Kschischang, B. J. Frey and H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498–519, 2001.
[17] S. Kullback and R. A. Leibler. On information and sufficiency. Annals of Mathematical Statistics, 22(1):79–86, 1951.
[18] S. L. Lauritzen. Propagation of probabilities, means, and variances in mixed graphical association models. Journal of the American Statistical Association, 87:1098–1108, 1992.
[19] U. Lerner and R. Parr. Inference in hybrid networks: Theoretical limits and practical algorithms. In Proceedings of the 17th Annual Conference on Uncertainty in Artificial Intelligence, pages 310–318. Morgan Kaufmann Publishers, 2001.
[20] U. Lerner, R. Parr, D. Koller and G. Biswas. Bayesian fault detection and diagnosis in dynamic systems. In Proceedings of the 17th National Conference on Artificial Intelligence, pages 531–537, 2000.
[21] T. Minka. Expectation propagation for approximate Bayesian inference. In Proceedings of the 17th Annual Conference on Uncertainty in Artificial Intelligence. Morgan Kaufmann Publishers, 2001.
[22] T. Minka and Y. Qi. Tree-structured approximations by expectation propagation. In Advances in Neural Information Processing Systems, pages 193–200. MIT Press, 2004.
[23] Y. Qi and T. Minka. Expectation propagation for signal detection in flat-fading channels. IEEE Transactions on Wireless Communications, 6:348–355, 2007.
[24] N. Shephard. Partial non-Gaussian state space models. Biometrika, 81:115–131, 1994.
[25] R. H. Shumway and D. S. Stoffer. Dynamic linear models with switching. Journal of the American Statistical Association, 86:763–769, 1991.
[26] M. Welling, T. Minka and Y. W. Teh. Structured region graphs: Morphing EP into GBP. In Proceedings of the Twenty-First Annual Conference on Uncertainty in Artificial Intelligence, pages 609–614. AUAI Press, 2005.
[27] J. Whittaker. Graphical Models in Applied Multivariate Statistics. John Wiley and Sons, 1989.
[28] J. Yedidia, W. Freeman and Y. Weiss. Generalized belief propagation. In Advances in Neural Information Processing Systems 13, pages 689–695. MIT Press, 2001.
[29] J. Yedidia, W. Freeman and Y. Weiss. Constructing free energy approximations and generalized belief propagation algorithms. Technical report, MERL, 2004.
[30] O. Zoeter and T. Heskes. Change point problems in linear dynamical systems. Journal of Machine Learning Research, 6:1999–2026, 2005.
