describe a random variable by the cumulative distribution function and by the probability density function;
compute the expected value, mean, variance, standard deviation, correlation, and covariance of a random variable;
define a Gaussian random signal;
define independent and identically distributed (IID) signals;
describe the concepts of stationarity, wide-sense stationarity, and ergodicity;
compute the power spectrum and the cross-spectrum;
relate the input and output spectra of an LTI system;
describe the stochastic properties of linear least-squares estimates and weighted linear least-squares estimates;
solve the stochastic linear least-squares problem; and
describe the concepts of unbiased, minimum-variance, and maximum-likelihood estimates.
Introduction
In Chapter 3 the response of an LTI system to various deterministic signals, such as a step input, was considered. A characteristic of a deterministic signal or sequence is that it can be reproduced exactly. On the other hand, a random signal, or a sequence of random variables, cannot be exactly reproduced. The randomness or unpredictability of the value of a certain variable in a modeling context arises generally from the limitations of the modeler in predicting a measured value by applying the “laws of Nature.” These limitations can be a consequence of the limits of scientific knowledge or of the desire of the modeler to work with models of low complexity. Measurements, in particular, introduce an unpredictable part because of their finite accuracy.
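As a small illustration (a generic Python/NumPy sketch with an assumed zero-mean Gaussian IID disturbance, not code from the book): a deterministic sequence can be reproduced exactly, whereas a measurement of it corrupted by a random disturbance cannot; the sketch also computes the sample mean, variance, standard deviation, and a lag-one covariance mentioned in the goals above.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 1000
    step = np.ones(N)                        # deterministic unit-step sequence: reproducible exactly
    noise = rng.normal(0.0, 0.1, size=N)     # zero-mean Gaussian IID disturbance (assumed model)
    y = step + noise                         # "measured" signal: cannot be reproduced exactly

    # Sample statistics of the random part and of the measured signal
    print("sample mean              :", noise.mean())
    print("sample variance          :", noise.var())
    print("sample standard deviation:", noise.std())
    print("lag-one sample covariance:", np.cov(noise[:-1], noise[1:])[0, 1])
    print("mean of measured signal  :", y.mean())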
Filtering and system identification are powerful techniques for building models of complex systems. This 2007 book discusses the design of reliable numerical methods to retrieve missing information in models derived using these techniques. Emphasis is on the least-squares approach as applied to the linear state-space model, and problems of increasing complexity are analyzed and solved within this framework, starting with the Kalman filter and concluding with the estimation of a full model, noise statistics, and state estimator directly from the data. Key background topics, including linear matrix algebra and linear system theory, are covered, followed by different estimation and identification methods in the state-space model. With end-of-chapter exercises, MATLAB simulations, and numerous illustrations, this book will appeal to graduate students and researchers in electrical, mechanical, and aerospace engineering. It is also useful for practitioners. Additional resources for this title, including solutions for instructors, are available online at www.cambridge.org/9780521875127.
list the four fundamental subspaces defined by a linear transformation;
compute the inverse, determinant, eigenvalues, and eigenvectors of a square matrix;
describe what positive-definite matrices are;
compute some important matrix decompositions, such as the eigenvalue decomposition, the singular-value decomposition and the QR factorization;
solve linear equations using techniques from linear algebra;
describe the deterministic least-squares problem; and
solve the deterministic least-squares problem in numerically sound ways.
Introduction
In this chapter we review some basic topics from linear algebra. The material presented is frequently used in the subsequent chapters.
Since the 1960s linear algebra has gained a prominent role in engineering as a contributing factor to the success of technological breakthroughs.
Linear algebra provides tools for numerically solving system-theoretic problems, such as filtering and control problems. The widespread use of linear algebra tools in engineering has in turn stimulated the development of the field of linear algebra, especially the numerical analysis of algorithms. A boost to the prominent role of linear algebra in engineering has certainly been provided by the introduction and widespread use of computer-aided-design packages such as Matlab (MathWorks, 2000b) and SciLab (Gomez, 1999). The user-friendliness of these packages allows us to program solutions for complex system-theoretic problems in just a few lines of code.
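As an indication of how few lines such a solution can take (a generic NumPy sketch, not code from the book or a toolbox), the following solves an overdetermined deterministic least-squares problem in a numerically sound way via the QR factorization and compares the result with an SVD-based solver.

    import numpy as np

    rng = np.random.default_rng(1)

    # Overdetermined system A x ~ b, with more equations than unknowns
    A = rng.normal(size=(50, 3))
    x_true = np.array([1.0, -2.0, 0.5])
    b = A @ x_true + 0.01 * rng.normal(size=50)   # small perturbation on the right-hand side

    # QR-based solution: A = QR, then solve R x = Q^T b
    Q, R = np.linalg.qr(A)
    x_qr = np.linalg.solve(R, Q.T @ b)

    # SVD-based solution (as used internally by lstsq)
    x_svd, *_ = np.linalg.lstsq(A, b, rcond=None)

    print("QR  solution:", x_qr)
    print("SVD solution:", x_svd)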
describe the prediction-error model-estimation problem;
parameterize the system matrices of a Kalman filter of fixed and known order such that all stable MIMO Kalman filters of that order are represented;
formulate the estimation of the parameters of a given Kalman-filter parameterization via the solution of a nonlinear optimization problem;
evaluate qualitatively the bias in parameter estimation for specific SISO parametric models, such as ARX, ARMAX, output error, and Box–Jenkins models, under the assumption that the signal-generating system does not belong to the class of parameterized Kalman filters; and
describe the problems that may occur in parameter estimation when using data generated in closed-loop operation of the signal-generating system.
Introduction
This chapter continues the discussion started in Chapter 7, on estimating the parameters in an LTI state-space model. It addresses the determination of both the deterministic and the stochastic part of an LTI model.
The objective is to determine, from a finite number of measurements of the input and output sequences, a one-step-ahead predictor given by the stationary Kalman filter without using knowledge of the system and covariance matrices of the stochastic disturbances. In fact, these system and covariance matrices (or alternatively the Kalman gain) need to be estimated from the input and output measurements. Note the difference from the approach followed in Chapter 5, where knowledge of these matrix quantities was used. The restriction imposed on the derivation of a Kalman filter from the data is the assumption of a stationary one-step-ahead predictor of a known order.
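To make the object of the estimation concrete, the following sketch (generic Python with assumed illustrative values for the system matrices and the Kalman gain, not code from the book) runs a stationary one-step-ahead predictor in innovation form. In the prediction-error setting these matrices are unknown, and their entries are the parameters to be estimated from the input and output measurements.

    import numpy as np

    # Assumed second-order example system and Kalman gain (illustrative values only)
    A = np.array([[0.9, 0.1],
                  [0.0, 0.8]])
    B = np.array([[0.0],
                  [1.0]])
    C = np.array([[1.0, 0.0]])
    K = np.array([[0.5],
                  [0.2]])

    def one_step_ahead_predictor(u, y, A, B, C, K):
        """Run the stationary innovation-form predictor; return predictions and prediction errors."""
        n = A.shape[0]
        x = np.zeros((n, 1))
        y_hat = np.zeros_like(y)
        e = np.zeros_like(y)
        for k in range(len(y)):
            y_hat[k] = (C @ x).item()            # one-step-ahead prediction of y(k)
            e[k] = y[k] - y_hat[k]               # innovation (prediction error)
            x = A @ x + B * u[k] + K * e[k]      # predictor state update
        return y_hat, e

    # Example use with arbitrary data (in practice: measured input and output of the unknown system)
    rng = np.random.default_rng(2)
    u = rng.normal(size=200)
    y = rng.normal(size=200)
    y_hat, e = one_step_ahead_predictor(u, y, A, B, C, K)
    print("mean-squared prediction error:", np.mean(e**2))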
derive the data equation that relates block Hankel matrices constructed from input–output data;
exploit the special structure of the data equation for impulse input signals to identify a state-space model via subspace methods;
use subspace identification for general input signals;
use instrumental variables in subspace identification to deal with process and measurement noise;
derive subspace identification schemes for various noise models;
use the RQ factorization for a computationally efficient implementation of subspace identification schemes; and
relate different subspace identification schemes via the solution of a least-squares problem.
Introduction
The problem of identifying an LTI state-space model from input and output measurements of a dynamic system, which we analyzed in the previous two chapters, is re-addressed in this chapter via a completely different approach. The approach we take is referred to in the literature (Verhaegen, 1994; Viberg, 1995; Van Overschee and De Moor, 1996b; Katayama, 2005) as the class of subspace identification methods. These methods are based on the fact that, by storing the input and output data in structured block Hankel matrices, it is possible to retrieve certain subspaces that are related to the system matrices of the signal-generating state-space model. Examples of such subspaces are the column space of the observability matrix, Equation (3.25) on page 67, and the row space of the state sequence of a Kalman filter.
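A minimal sketch of this idea for the special case of impulse-response data (a Ho-Kalman-style illustration in generic NumPy with an assumed second-order example system; it is not the book's algorithm): the Markov parameters are stacked in a Hankel matrix, and its singular-value decomposition reveals the column space of the observability matrix and hence the system order.

    import numpy as np

    # Assumed second-order SISO example system, used here only to generate impulse-response data
    A = np.array([[0.8, 0.2],
                  [0.0, 0.5]])
    B = np.array([[1.0],
                  [1.0]])
    C = np.array([[1.0, 0.0]])

    # Markov parameters h[k] = C A^k B, k = 0, 1, ... (the impulse response, with D = 0)
    h = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(20)]

    # Hankel matrix of the impulse response; it factors as (observability matrix) x (controllability matrix)
    rows, cols = 5, 5
    H = np.array([[h[i + j] for j in range(cols)] for i in range(rows)])

    # The SVD reveals the column space of the extended observability matrix;
    # the number of dominant singular values indicates the system order.
    U, s, Vt = np.linalg.svd(H)
    print("singular values:", np.round(s, 4))   # two dominant values -> order two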
Making observations of the environment around us through the senses is a natural activity of living species. The information acquired is diverse, consisting for example of sound signals and images. The information is processed and used to build a particular model of the environment that is applicable to the situation at hand. This act of model building based on observations is embedded in our human nature and plays an important role in daily decision making.
Model building through observations also plays a very important role in many branches of science. Despite the importance of making observations through our senses, scientific observations are often made via measurement instruments or sensors. The measurement data that these sensors acquire often need to be processed to judge or validate the experiment, or to obtain more information on conducting the experiment. Data are often used to build a mathematical model that describes the dynamical properties of the experiment. System-identification methods are systematic methods that can be used to build mathematical models from measured data. One important use of such mathematical models is in predicting model quantities by filtering acquired measurements.
A milestone in the history of filtering and system identification is the method of least squares developed just before 1800 by Johann Carl Friedrich Gauss (1777–1855). The use of least squares in filtering and identification is a recurring theme in this book. What follows is a brief sketch of the historical context that characterized the early development of the least-squares method.
measure the “size” of a discrete-time signal using norms;
use the z-transform to convert discrete-time signals to the complex z-plane;
use the discrete-time Fourier transform to convert discrete-time signals to the frequency domain;
describe the properties of the z-transform and the discrete-time Fourier transform;
define a discrete-time state-space system;
determine properties of discrete-time systems such as stability, controllability, observability, time invariance, and linearity;
approximate a nonlinear system in the neighborhood of a certain operating point by a linear time-invariant system;
check stability, controllability, and observability for linear time-invariant systems;
represent linear time-invariant systems in different ways; and
deal with interactions between linear time-invariant systems.
Introduction
This chapter deals with two important topics: signals and systems. A signal is basically a value that changes over time. For example, the outside temperature as a function of the time of the day is a signal. More specifically, this is a continuous-time signal; the signal value is defined at every time instant. If we are interested in measuring the outside temperature, we will seldom do this continuously. A more practical approach is to measure the temperature only at certain time instants, for example every minute. The signal that is obtained in that way is a sequence of numbers; its values correspond to certain time instants. Such an ordered sequence is called a discrete-time signal.
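A minimal sketch of a discrete-time LTI state-space system and of the stability, controllability, and observability checks listed among this chapter's goals (generic NumPy code with assumed example matrices, not code from the book):

    import numpy as np

    # Assumed example of a discrete-time LTI state-space system
    # x(k+1) = A x(k) + B u(k),  y(k) = C x(k) + D u(k)
    A = np.array([[0.5, 1.0],
                  [0.0, 0.9]])
    B = np.array([[0.0],
                  [1.0]])
    C = np.array([[1.0, 0.0]])
    D = np.array([[0.0]])
    n = A.shape[0]

    # Stability: all eigenvalues of A strictly inside the unit circle
    stable = np.all(np.abs(np.linalg.eigvals(A)) < 1)

    # Controllability matrix [B, AB, ..., A^(n-1)B] and observability matrix [C; CA; ...; CA^(n-1)]
    ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

    print("stable       :", stable)
    print("controllable :", np.linalg.matrix_rank(ctrb) == n)
    print("observable   :", np.linalg.matrix_rank(obsv) == n)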
use the discrete Fourier transform to transform finite-length time signals to the frequency domain;
describe the properties of the discrete Fourier transform;
relate the discrete Fourier transform to the discrete-time Fourier transform;
efficiently compute the discrete Fourier transform using fast-Fourier-transform algorithms;
estimate spectra from finite-length data sequences;
reduce the variance of spectral estimates using blocked data processing and windowing techniques;
estimate the frequency-response function (FRF) and the disturbance spectrum from finite-length data sequences for an LTI system contaminated by output noise; and
reduce the variance of FRF estimates using windowing techniques.
Introduction
In this chapter the problem of determining a model from input and output measurements is treated using frequency-domain methods. In the previous chapter we studied the estimation of the state given the system and measurements of its inputs and outputs. In this chapter we are not concerned with estimating the state. The models that will be estimated are input–output models, in which the state does not occur. More specifically, we investigate how to obtain in a simple and fast manner an estimate of the dynamic transfer function of an LTI system from recorded input and output data sequences taken from that system. We are interested in estimating the frequency-response function (FRF) that relates the measurable input to the measurable output sequence. The FRF has already been discussed briefly in Section 3.4.4, and its estimation is based on Lemma 4.3 via the estimation of the signal spectra of the recorded input and output data. Special attention is paid to the case of practical interest in which the data records have finite length.
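A minimal sketch of such an estimate (generic NumPy/SciPy code for an assumed first-order example system, not the book's implementation): the FRF is estimated as the ratio of the averaged input-output cross-spectrum to the averaged input auto-spectrum, with blocked data processing and windowing to reduce the variance.

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(3)

    # Assumed example: first-order system y(k) = 0.9 y(k-1) + u(k), with additive output noise
    N = 4096
    u = rng.normal(size=N)
    y = signal.lfilter([1.0], [1.0, -0.9], u) + 0.1 * rng.normal(size=N)

    # Welch-style estimates of the input auto-spectrum and the input-output cross-spectrum
    f, Puu = signal.welch(u, nperseg=256, window="hann")
    f, Puy = signal.csd(u, y, nperseg=256, window="hann")

    # FRF estimate as the ratio of cross-spectrum to auto-spectrum
    G_hat = Puy / Puu

    # Compare with the true frequency response at the same frequencies
    _, G_true = signal.freqz([1.0], [1.0, -0.9], worN=2 * np.pi * f)
    print("max magnitude error:", np.max(np.abs(np.abs(G_hat) - np.abs(G_true))))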
describe the output-error model-estimation problem;
parameterize the system matrices of a MIMO LTI state-space model of fixed and known order such that all stable models of that order are represented;
formulate the estimation of the parameters of a given system parameterization as a nonlinear optimization problem;
numerically solve a nonlinear optimization problem using gradient-type algorithms;
evaluate the accuracy of the obtained parameter estimates via their asymptotic variance under the assumption that the signal-generating system belongs to the class of parameterized state-space models; and
describe two ways of dealing with nonwhite noise acting on the output of an LTI system when estimating its parameters.
Introduction
After the treatment of the Kalman filter in Chapter 5 and the estimation of the frequency-response function (FRF) in Chapter 6, we move another step forward in our exploration of how to retrieve information about linear time-invariant (LTI) systems from input and output measurements. The step forward is taken by analyzing how we can estimate (part of) the system matrices of the signal-generating model from acquired input and output data. We first tackle this problem as a complicated estimation problem by attempting to estimate both the state vector and the system matrices. Later on, in Chapter 9, we outline the so-called subspace identification methods that solve such problems by means of linear least-squares problems.
Nonparametric models such as the FRF could also be obtained via the simple least-squares method or the computationally more attractive fast Fourier transform.
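A minimal sketch of the kind of nonlinear optimization involved (generic SciPy code for an assumed first-order output-error model, not the book's algorithm): the parameters enter the simulated output nonlinearly, and a gradient-type least-squares routine minimizes the output error between the measured and the model output.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(4)

    # Assumed data-generating system y(k) = a*y(k-1) + b*u(k-1), with white noise added to the output
    a_true, b_true = 0.7, 0.5
    N = 500
    u = rng.normal(size=N)
    y = np.zeros(N)
    for k in range(1, N):
        y[k] = a_true * y[k - 1] + b_true * u[k - 1]
    y += 0.05 * rng.normal(size=N)

    def simulate(theta, u):
        """Simulate the model output for parameters theta = (a, b); nonlinear in a."""
        a, b = theta
        y_sim = np.zeros(len(u))
        for k in range(1, len(u)):
            y_sim[k] = a * y_sim[k - 1] + b * u[k - 1]
        return y_sim

    def output_error(theta):
        return y - simulate(theta, u)

    # Gradient-based minimization of the output-error cost
    result = least_squares(output_error, x0=[0.1, 0.1])
    print("estimated (a, b):", np.round(result.x, 3), " true:", (a_true, b_true))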
explain that the identification of an LTI model making use of real-life measurements is more than just estimating parameters in a user-defined model structure;
identify an LTI model in a cyclic manner of iteratively refining data and models and progressively making use of more complex numerical optimization methods;
explain that the identification cycle requires many choices to be made on the basis of cautious experiments, the user's expertise, and prior knowledge about the system to be identified or about systems bearing a close relationship with, or resemblance to, the target system;
argue that a critical choice in system identification is the selection of the input sequence, both in terms of acquiring qualitative information for setting or refining experimental conditions and in terms of accurately estimating models;
describe the role of the notion of persistency of excitation in system identification;
use subspace identification methods to initialize prediction-error methods in identifying state-space models in the innovation form; and
understand that the art of system identification is mastered by applying theoretical insights and methods to real-life experiments and working closely with an expert in the field.
Introduction
In the previous chapters, it was assumed that time sequences of input and output quantities of an unknown dynamical system were given. The task was to estimate parameters in a user-specified model structure on the basis of these time sequences.
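One of the choices emphasized in the goals above is the selection of the input sequence. A common way to check its persistency of excitation of a given order (sketched below in generic NumPy, using the usual rank condition on a Hankel matrix built from the input; the function name is chosen here for illustration) is the following.

    import numpy as np

    def persistently_exciting(u, order):
        """Check persistency of excitation of the given order via the rank of a Hankel matrix of u."""
        N = len(u) - order + 1
        H = np.array([u[i:i + N] for i in range(order)])   # order x N Hankel matrix of the input
        return np.linalg.matrix_rank(H) == order

    rng = np.random.default_rng(5)
    white_noise = rng.normal(size=200)          # rich input
    single_sine = np.sin(0.3 * np.arange(200))  # a single sinusoid is persistently exciting of order 2 only

    print("white noise, order 10:", persistently_exciting(white_noise, 10))   # expected True
    print("single sine, order 10:", persistently_exciting(single_sine, 10))   # expected False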
This book is intended as a first-year graduate course for engineering students. It stresses the role of linear algebra and the least-squares problem in the field of filtering and system identification. The experience gained with this course at the Delft University of Technology and the University of Twente in the Netherlands has shown that the review of undergraduate study material from linear algebra, statistics, and system theory makes this course an ideal start to the graduate course program. More importantly, the geometric concepts from linear algebra and the central role of the least-squares problem stimulate students to understand how filtering and identification algorithms arise and also to start developing new ones. The course gives students the opportunity to see mathematics at work in solving engineering problems of practical relevance.
The course material can be covered in seven lectures:
(i) Lecture 1: Introduction and review of linear algebra (Chapters 1 and 2)
(ii) Lecture 2: Review of system theory and probability theory (Chapters 3 and 4)
(iii) Lecture 3: Kalman filtering (Chapter 5)
(iv) Lecture 4: Estimation of frequency-response functions (Chapter 6)
(v) Lecture 5: Estimation of the parameters in a state-space model (Chapters 7 and 8)
(vi) Lecture 6: Subspace model identification (Chapter 9)
(vii) Lecture 7: From theory to practice: the system-identification cycle (Chapter 10).
The authors are of the opinion that the transfer of knowledge is greatly improved when each lecture is followed by working classes in which the students do the exercises of the corresponding chapters under the supervision of a tutor.