Published online by Cambridge University Press: 05 July 2014
In Chapter 8, it is shown that the kernel ridge regressor (KRR) offers a unified treatment of over-determined and under-determined systems. Another way to unify these two linear systems approaches is via the support vector machine (SVM) learning model proposed by Vapnik [41, 280, 281].
Like FDA, the SVM aims at the separation of two classes. FDA focuses on separating the positive and negative centroids while taking the total data distribution into account. In contrast, SVM aims at separating only the so-called support vectors, i.e. only those training vectors deemed critical for class separation.
Like ridge regression, the objective of the SVM classifier also involves minimizing the two-norm of the decision vector.
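As a sketch of the standard form of this objective (the notation here is generic, not taken from this chapter): for training vectors $\mathbf{x}_i$ with labels $y_i \in \{+1, -1\}$, the linearly separable (hard-margin) SVM solves

```latex
\min_{\mathbf{w},\, b} \;\; \frac{1}{2}\|\mathbf{w}\|^{2}
\quad \text{subject to} \quad
y_i\left(\mathbf{w}^{\mathsf{T}}\mathbf{x}_i + b\right) \geq 1,
\qquad i = 1, \dots, N,
```

where $\mathbf{w}$ is the decision vector whose two-norm is being minimized and $b$ is the bias (threshold) term.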
The key step in SVM learning is to identify a set of representative training vectors deemed most useful for shaping the (linear or nonlinear) decision boundary. These training vectors are called “support vectors”; the rest are called non-support vectors. Note that only the support vectors directly take part in characterizing the decision boundary of the SVM.
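As an illustrative sketch of this distinction (using scikit-learn's `SVC`, which is not part of the text, on hypothetical toy data), one can fit a linear SVM and inspect which training vectors were selected as support vectors:

```python
import numpy as np
from sklearn.svm import SVC

# Toy linearly separable two-class data set (hypothetical example data).
X = np.array([[-2.0, -1.0], [-1.0, -1.0], [-1.0, -2.0],
              [ 1.0,  1.0], [ 1.0,  2.0], [ 2.0,  1.0]])
y = np.array([-1, -1, -1, 1, 1, 1])

# A large C approximates the hard-margin SVM: minimize ||w||^2 subject
# to every training point lying on the correct side of the margin.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

# Only the support vectors shape the decision boundary; the remaining
# (non-support) training vectors could be removed without changing it.
print("support vectors:\n", clf.support_vectors_)
print("number of non-support vectors:", len(X) - len(clf.support_))
```

Refitting on the support vectors alone would reproduce the same boundary, which is what makes them the critical vectors for class separation.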
SVM has been successfully applied to an enormously broad spectrum of application domains, including signal processing and classification, image retrieval, multimedia, fault detection, communication, computer vision, security/authentication, time-series prediction, biomedical prediction, and bioinformatics.