Book contents
- Frontmatter
- Contents
- List of contributors
- Preface
- 1 Inference and estimation in probabilistic time series models
- I Monte Carlo
- II Deterministic approximations
- 5 Two problems with variational expectation maximisation for time series models
- 6 Approximate inference for continuous-time Markov processes
- 7 Expectation propagation and generalised EP methods for inference in switching linear dynamical systems
- 8 Approximate inference in switching linear dynamical systems using Gaussian mixtures
- III Switching models
- IV Multi-object models
- V Nonparametric models
- VI Agent-based models
- Index
- Plate section
- References
6 - Approximate inference for continuous-time Markov processes
from II - Deterministic approximations
Published online by Cambridge University Press: 07 September 2011
Summary
Introduction
Markov processes are probabilistic models for describing data with a sequential structure. Probably the most common example is a dynamical system whose state evolves over time. For modelling purposes it is often convenient to assume that the system states are not directly observed: each observation is a possibly incomplete, non-linear and noisy measurement (or transformation) of the underlying hidden state. In general, observations of the system occur only at discrete times, while the underlying system is inherently continuous in time. Continuous-time Markov processes arise in a variety of scientific areas such as physics, environmental modelling, finance, engineering and systems biology.
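As an illustration of this setup (not taken from the chapter), the sketch below simulates a continuous-time latent diffusion, here an Ornstein-Uhlenbeck process, on a fine Euler-Maruyama grid and then generates noisy measurements of it only at a sparse set of discrete observation times. All names and values (theta, sigma_x, sigma_y, the observation spacing) are hypothetical choices for the example.

```python
# Illustrative sketch: continuous-time hidden state, discrete-time noisy observations.
import numpy as np

rng = np.random.default_rng(0)

theta, sigma_x = 2.0, 0.5      # assumed drift rate and diffusion coefficient
sigma_y = 0.2                  # assumed observation noise standard deviation
dt, T = 0.001, 5.0             # fine simulation step and time horizon
n_steps = int(T / dt)

# Euler-Maruyama simulation of the latent path dX = -theta * X dt + sigma_x dW
x = np.empty(n_steps + 1)
x[0] = 1.0
for k in range(n_steps):
    x[k + 1] = x[k] - theta * x[k] * dt + sigma_x * np.sqrt(dt) * rng.standard_normal()

# The system is observed only at a sparse grid of discrete times
obs_every = 500                                  # one observation every 0.5 time units
obs_idx = np.arange(0, n_steps + 1, obs_every)
y = x[obs_idx] + sigma_y * rng.standard_normal(obs_idx.size)
```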
The continuous-time evolution of the system imposes strong constraints on the model dynamics. For example, the individual trajectories of a diffusion process are rough, but the mean trajectory is a smooth function of time. Unfortunately, this information is often underexploited, or not exploited at all, when practical systems are devised. The main reason is that inferring the state trajectories and the model parameters is a difficult problem, because trajectories are infinite-dimensional objects. Hence, a practical approach usually requires some sort of approximation. For example, Markov chain Monte Carlo (MCMC) methods usually discretise time [41, 16, 34, 2, 20], while particle filters approximate continuous densities by a finite number of point masses [13, 14, 15]. More recently, approaches using perfect simulation have been proposed [7, 8, 18]. The main advantage of these MCMC techniques is that they do not require approximating the transition density through a time discretisation.
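The following is a minimal sketch of the particle-filter idea mentioned above: the filtering density is represented by a finite set of weighted point masses, with particles propagated between observation times by an Euler-Maruyama time discretisation. It assumes the Ornstein-Uhlenbeck model and the variables (y, obs_idx, dt, theta, sigma_x, sigma_y) from the previous sketch; the function name and particle count are arbitrary, and this is not the chapter's own algorithm.

```python
# Bootstrap particle filter sketch for the observed Ornstein-Uhlenbeck model above.
import numpy as np

def particle_filter(y, obs_idx, dt, theta, sigma_x, sigma_y,
                    n_particles=1000, seed=1):
    rng = np.random.default_rng(seed)
    particles = rng.standard_normal(n_particles)   # crude prior over the initial state
    means = []
    prev_idx = 0
    for k, idx in enumerate(obs_idx):
        # Propagate each particle forward with Euler-Maruyama sub-steps
        for _ in range(idx - prev_idx):
            particles = (particles - theta * particles * dt
                         + sigma_x * np.sqrt(dt) * rng.standard_normal(n_particles))
        prev_idx = idx
        # Weight by the Gaussian observation likelihood, then resample
        log_w = -0.5 * ((y[k] - particles) / sigma_y) ** 2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        means.append(np.sum(w * particles))         # filtered mean as a point mass average
        particles = rng.choice(particles, size=n_particles, p=w)  # multinomial resampling
    return np.array(means)

# Example usage (with variables from the previous sketch):
# filtered_means = particle_filter(y, obs_idx, dt, theta, sigma_x, sigma_y)
```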
Type: Chapter
Information: Bayesian Time Series Models, pp. 125-140
Publisher: Cambridge University Press
Print publication year: 2011