With the advent of high-throughput technologies such as multicolor flow cytometry and next-generation sequencing, high-dimensional data has become increasingly common in biomedical research. In many applications, such as proteomics, genomics, and immunology, the data has become increasingly wide. That is, we know a great deal about each of a relatively small number of subjects, so the number of features, p, greatly exceeds the number of observations, n. This “large p, small n” problem gives rise to a number of well-known statistical issues.
For example, a researcher may be interested in discovering a single nucleotide polymorphism (SNP) that is associated with a particular disease outcome. A nucleotide is a subunit of the DNA molecule, and each nucleotide contains one of four bases: adenine (A), thymine (T; uracil, U, in RNA), guanine (G), or cytosine (C). In the DNA double helix, base pairs form between A and T and between G and C. A SNP is a single base pair at which the nucleotides differ between members of a population.
For example, 99% of the population may carry two A alleles, AA, while the remaining 1% may carry a different nucleotide on one or both alleles, say AG or GG, at this particular base pair. In this case, A is the common (major) allele and G is the rare (minor) allele. The feature corresponding to this SNP may count the number of rare alleles (0, 1, or 2). Alternatively, the features may record whether an observation has genotype AA, AG, or GG. This is achieved with two dummy variables, so in this case there are two features corresponding to each SNP.
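As a concrete illustration of the two encodings, the following Python snippet codes the genotypes of six subjects both ways. The data are invented for illustration, with A as the common allele and G as the rare allele.

```python
# Hypothetical data: genotypes of six subjects at one SNP.
genotypes = ["AA", "AG", "AA", "GG", "AG", "AA"]

# Encoding 1: a single feature counting the rare (G) alleles -- 0, 1, or 2.
rare_counts = [g.count("G") for g in genotypes]
print(rare_counts)  # [0, 1, 0, 2, 1, 0]

# Encoding 2: two dummy variables, with AA as the reference genotype.
is_ag = [int(g == "AG") for g in genotypes]
is_gg = [int(g == "GG") for g in genotypes]
print(is_ag)  # [0, 1, 0, 0, 1, 0]
print(is_gg)  # [0, 0, 0, 1, 0, 0]
```

The additive count encoding assumes each extra rare allele shifts the outcome by the same amount, while the dummy-variable encoding lets each genotype have its own effect at the cost of one extra feature per SNP.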
In many cases, there may be hundreds of thousands of potentially important SNPs and only a few hundred subjects. To further complicate matters, these SNPs may be highly correlated with one another. Some SNPs may vary little across members of the population, and some may be missing for a significant portion of the observations. Additional demographic features, such as gender or race, may confound the relationship between a SNP and a disease outcome. In many of these applications, prediction may be secondary to feature selection. That is, a researcher may want to identify the top variables of interest for further study.
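As a rough sketch of what such a feature-selection step might look like, the following Python snippet ranks simulated SNPs by their absolute correlation with a continuous outcome. The simulated data, the univariate-ranking approach, and all parameter choices are illustrative assumptions, not a prescribed method.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 1000  # far more SNP features than subjects

# Simulated rare-allele counts (0, 1, or 2) for each subject and SNP.
X = rng.integers(0, 3, size=(n, p)).astype(float)

# Only SNPs 3 and 7 truly affect the (simulated) outcome.
beta = np.zeros(p)
beta[[3, 7]] = 2.0
y = X @ beta + rng.standard_normal(n)

# Score each SNP by its absolute Pearson correlation with the outcome.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
scores = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))

# Report the five top-ranked SNPs for further study.
top5 = np.argsort(scores)[::-1][:5]
print(sorted(top5.tolist()))
```

Univariate ranking of this kind ignores the correlation among SNPs and any confounders, which is exactly why the complications listed above matter in practice.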
Classical regression methods such as linear regression and logistic regression are highly unstable when p is comparable to n, and they fail outright when p exceeds n: the design matrix no longer has full column rank, so the usual estimating equations have no unique solution.
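A short Python sketch makes the p > n failure concrete; the dimensions and simulated data are arbitrary choices for illustration.

```python
import numpy as np

# With n = 20 observations and p = 100 features, the least squares normal
# equations involve the p x p matrix X^T X. Its rank is at most n < p,
# so it is singular and the coefficients are not uniquely determined.
rng = np.random.default_rng(0)
n, p = 20, 100
X = rng.standard_normal((n, p))

xtx = X.T @ X
print(xtx.shape)                   # (100, 100)
print(np.linalg.matrix_rank(xtx))  # 20 -- rank-deficient, hence singular
```

Because infinitely many coefficient vectors fit the data exactly in this setting, some form of regularization or dimension reduction is needed before any regression output can be trusted.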