Book contents
- Frontmatter
- Contents
- Preface
- 1 Introduction
- Part I Stochastic Models and Bayesian Filtering
- 2 Stochastic state space models
- 3 Optimal filtering
- 4 Algorithms for maximum likelihood parameter estimation
- 5 Multi-agent sensing: social learning and data incest
- Part II Partially Observed Markov Decision Processes: Models and Applications
- Part III Partially Observed Markov Decision Processes: Structural Results
- Part IV Stochastic Approximation and Reinforcement Learning
- Appendix A Short primer on stochastic simulation
- Appendix B Continuous-time HMM filters
- Appendix C Markov processes
- Appendix D Some limit theorems
- References
- Index
2 - Stochastic state space models
from Part I - Stochastic Models and Bayesian Filtering
Published online by Cambridge University Press: 05 April 2016
Summary
This chapter discusses stochastic state space models and shows how optimal predictors can be constructed to predict the future state of a stochastic dynamic system. We then examine how such predictors converge over large time horizons to a stationary predictor.
Stochastic state space model
The stochastic state space model is the main model used throughout this book. We will also use the phrase partially observed Markov model interchangeably with state space model.
We start by giving two equivalent definitions of a stochastic state space model. The first definition is in terms of a stochastic difference equation. The second definition is presented in terms of the transition kernel of a Markov process and the observation likelihood.
Difference equation form of stochastic state space model
Let k = 0, 1, … denote discrete time. A discrete-time stochastic state space model comprises two random processes {xk} and {yk}:

xk+1 = ϕk(xk, wk), x0 ∼ π0 (2.1)

yk = ψk(xk, vk). (2.2)
The difference equation (2.1) is called the state equation. It models the evolution of the state xk of a nonlinear stochastic system – nonlinear since ϕk(x, w) can be any nonlinear function; stochastic since the system is driven by the random process {wk}, which denotes the “process noise” or “state noise”. At each time k, the state xk lies in the state space X = ℝ^X. The initial state x0 at time k = 0 is generated randomly according to the prior distribution π0. This is denoted symbolically as x0 ∼ π0.
The observation equation (2.2) models a nonlinear noisy sensor that observes the state process {xk} corrupted by measurement noise {vk}. At each time k, the observation yk is a Y-dimensional vector-valued random variable. Note that yk defined by (2.2) is a doubly stochastic process: it is a random function of the state xk, which is itself a stochastic process evolving according to (2.1).
It is assumed that the state noise process {wk} is an X-dimensional independent and identically distributed (i.i.d.) sequence of random variables, and that the observation noise process {vk} is a Y-dimensional i.i.d. sequence of random variables. It is also assumed that {wk}, {vk} and x0 are mutually independent.
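As a minimal illustration of the model (2.1)–(2.2), the sketch below simulates a hypothetical scalar linear-Gaussian instance: the specific choices of ϕ, ψ, the noise scales and the prior are assumptions for demonstration only, since the model itself permits arbitrary nonlinear ϕk and ψk.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x, w):
    # State equation (2.1): x_{k+1} = phi(x_k, w_k).
    # A scalar linear choice, purely illustrative.
    return 0.9 * x + w

def psi(x, v):
    # Observation equation (2.2): y_k = psi(x_k, v_k).
    return x + v

T = 100
x = rng.normal()              # x_0 ~ pi_0 (standard normal prior, assumed)
xs, ys = [], []
for k in range(T):
    v = rng.normal(scale=0.5)  # i.i.d. observation noise v_k
    xs.append(x)
    ys.append(psi(x, v))       # y_k is a noisy function of x_k
    w = rng.normal(scale=0.3)  # i.i.d. process noise w_k
    x = phi(x, w)              # propagate the state to time k+1
```

Note that {wk}, {vk} and x0 are drawn from independent streams of the generator, matching the independence assumption above, and that {yk} is doubly stochastic: randomness enters through both vk and the random trajectory {xk}.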
- Partially Observed Markov Decision Processes: From Filtering to Controlled Sensing, pp. 11–33. Publisher: Cambridge University Press. Print publication year: 2016.