  • Print publication year: 2015
  • Online publication date: April 2015

1 - Introduction


Large-Dimensional Data and New Asymptotic Statistics

In a multivariate analysis problem, we are given a sample x1, x2, …, xn of random observations of dimension p. Statistical methods, such as principal component analysis, have been developed since the beginning of the 20th century. When the observations are Gaussian, exact (nonasymptotic) methods are available, such as Student's test, Fisher's test, and the analysis of variance. However, in most applications the observations are at least partly non-Gaussian, so that exact results become hard to obtain and statistical methods are instead built on limiting theorems for model statistics.

Most of these asymptotic results are derived under the assumption that the data dimension p is fixed while the sample size n tends to infinity (large sample theory). This theory was adopted by most practitioners until very recently, when they were faced with a new challenge: the analysis of large-dimensional data.

Large-dimensional data appear in various fields for different reasons. In finance, as a consequence of the widespread adoption of the Internet and of electronic trading, supported by exponentially increasing computing power, gigabytes of online data from markets around the world are accumulated every day. In genetic experiments, such as microarrays, it has become possible to record the expression of several thousand genes from a single tissue sample. Table 1.1 displays some typical data dimensions and sample sizes. We can see from this table that the data dimension p is far from the "usual" situations where p is commonly less than 10. We refer to this new type of data as large-dimensional data.

It has long been observed that several well-known methods in multivariate analysis become inefficient, or even misleading, when the data dimension p is not as small as, say, several tens. A seminal example was provided by Dempster in 1958, when he established the inefficiency of Hotelling's T2 test in such cases and provided a remedy (named a non-exact test). However, at that time, no statistician was able to discover the fundamental reasons for such a breakdown in these well-established methods.
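The root of the difficulty with Hotelling's T2 is easy to exhibit numerically. The statistic inverts the sample covariance matrix, whose rank is almost surely min(p, n − 1) for a continuous distribution: once p reaches n, the matrix is singular and T2 cannot even be computed, and for p close to n the inverse is severely ill-conditioned. The following sketch (a hypothetical illustration, not from the text; the function name `sample_cov_rank` is ours) checks this rank behaviour on simulated Gaussian data:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_cov_rank(n, p):
    """Rank of the sample covariance matrix of n observations in dimension p."""
    x = rng.standard_normal((n, p))   # rows = i.i.d. standard Gaussian observations
    s = np.cov(x, rowvar=False)       # p x p sample covariance (mean-centered)
    return np.linalg.matrix_rank(s)

# With p < n the sample covariance is almost surely full rank ...
print(sample_cov_rank(n=100, p=10))   # 10

# ... but with p >= n its rank is at most n - 1, so it is singular
# and Hotelling's T2, which requires its inverse, is undefined.
print(sample_cov_rank(n=20, p=50))    # 19
```

Even when p < n, so that the inverse formally exists, the sample covariance is a poor estimate of the population covariance once p/n is not negligible; this distortion is the deeper cause of the breakdown that the large-dimensional asymptotics of this book make precise.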