Published online by Cambridge University Press: 05 June 2012
One of the earliest problems that faced time series analysts was the modelling of long-term persistence, or trends, in observed data. A major motivation for studying the trend properties of time series was the belief that long-term components should be removed in order to uncover any remaining short-term regularities. Until the 1980s the dominant view was that these properties could be well described by deterministic functions of a time index. This approach was pioneered by Jevons in the mid-nineteenth century and was popularised fifty years later by Persons with the celebrated ‘Harvard A-B-C curve’ methodology of stock market prediction (see Samuelson, 1987, and Morgan, 1990). At present, the dominant paradigm in economic and financial time series modelling builds upon the random walk model, first introduced into finance by Bachelier (1900), where, as we have seen in chapter 2, stochastic trends arise from the accumulation of random changes.
These two approaches constitute the main apparatus for the analysis of non-stationary time series – i.e. of time series that, broadly speaking, wander without bound from any fixed origin and do not have well-defined or finite unconditional moments. The first approach deals with trend-stationary processes, a class of non-stationary processes that can be made stationary by removing a deterministic trend in the form of a polynomial in time. The second approach deals with random walks, which are produced by the accumulation (or integration, in continuous time) of white-noise random variables.
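The distinction between the two classes can be illustrated with a short simulation. The sketch below, which is not from the original text, generates a trend-stationary series (a deterministic linear trend plus white noise) and a random walk (the cumulative sum of the same noise), then applies the transformation appropriate to each: subtracting the trend in the first case and first-differencing in the second. The slope value and sample size are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
t = np.arange(n)
eps = rng.standard_normal(n)  # white-noise innovations

# Trend-stationary process: deterministic linear trend plus stationary noise.
trend_stationary = 0.5 * t + eps

# Random walk: accumulation (discrete-time integration) of white noise,
# producing a stochastic trend.
random_walk = np.cumsum(eps)

# Removing the deterministic trend recovers the stationary component...
detrended = trend_stationary - 0.5 * t

# ...whereas for the random walk it is differencing, not detrending,
# that yields a stationary series.
differenced = np.diff(random_walk)
```

Note that detrending the random walk would not make it stationary: its stochastic trend is removed only by differencing, which is the operational content of the distinction drawn above.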