Published online by Cambridge University Press: 05 July 2014
This part contains three chapters focused on support vector machines (SVMs). The SVM learning model lies at the heart of kernel methods and has been a major driving force behind modern machine learning technologies.
Chapter 10 focuses on basic SVM learning theory, which rests on the identification of a set of “support vectors” via the well-known Karush–Kuhn–Tucker (KKT) conditions. The support vectors alone determine the decision boundary. The LSP clearly holds for SVM learning models, so the kernelized SVM learning models take exactly the same form for linear and nonlinear problems, as evidenced by Algorithm 10.1.
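The margin-based picture can be illustrated with a minimal sketch: a Pegasos-style subgradient solver for a linear soft-margin SVM (not the book's Algorithm 10.1), in which the points lying on or inside the margin, i.e. those with y_i f(x_i) ≤ 1, play the role of the support vectors that the KKT conditions single out. The function names and the bias-free form f(x) = w·x are illustrative assumptions.

```python
import random

def train_linear_svm(points, labels, lam=0.01, epochs=2000, seed=0):
    """Pegasos-style stochastic subgradient descent for a linear
    soft-margin SVM (a sketch; bias term omitted for simplicity)."""
    rng = random.Random(seed)
    n, d = len(points), len(points[0])
    w, t = [0.0] * d, 0
    for _ in range(epochs):
        for i in rng.sample(range(n), n):   # one pass in random order
            t += 1
            eta = 1.0 / (lam * t)           # decaying step size
            margin = labels[i] * sum(wj * xj for wj, xj in zip(w, points[i]))
            w = [(1 - eta * lam) * wj for wj in w]   # L2 shrinkage
            if margin < 1:                  # hinge-loss subgradient step
                w = [wj + eta * labels[i] * xj for wj, xj in zip(w, points[i])]
    return w

def support_vectors(points, labels, w, tol=0.1):
    """Indices of points on or inside the margin, y_i (w . x_i) <= 1,
    an approximate stand-in for the exact KKT support-vector test."""
    f = lambda x: sum(wj * xj for wj, xj in zip(w, x))
    return [i for i in range(len(points))
            if labels[i] * f(points[i]) <= 1 + tol]
```

On a toy separable set such as x = −2, −1, 1, 2 on a line, the trained boundary sits near the origin and only the two inner points qualify as (approximate) support vectors, matching the statement that they alone shape the decision boundary.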
Chapter 11 covers support-vector-based learning models for regression and outlier detection. Support vector regression (SVR), see Algorithm 11.1, seeks an approximating function that fits the training data under the guidance of teacher values. The chapter then explores several SVM-based learning models for outlier detection, including hyperplane OCSVM (Algorithm 11.2), hypersphere OCSVM (Algorithm 11.3), and SVC. For all of these models the fraction of outliers can be estimated analytically, a sharp contrast to the other SVM learning models. In fact, for Gaussian kernels it can be shown that the three algorithms coincide. When polynomial kernels are adopted, however, translation invariance becomes a legitimate concern for the hyperplane-OCSVM learning models.
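As a rough illustration of how a parameter ν can bound the fraction of outliers analytically, the following toy stand-in for the hypersphere model centres a ball at the sample mean and picks the smallest radius that keeps at least a 1 − ν fraction of the points inside; the actual hypersphere OCSVM (Algorithm 11.3) optimises the centre as well. The function name is hypothetical.

```python
import math

def hypersphere_outliers(points, nu=0.1):
    """Toy hypersphere outlier detector (a sketch, not Algorithm 11.3):
    centre = sample mean; radius = smallest value keeping at least a
    (1 - nu) fraction of the training points inside the ball."""
    n, d = len(points), len(points[0])
    centre = [sum(p[k] for p in points) / n for k in range(d)]
    dist = [math.dist(p, centre) for p in points]
    ranked = sorted(dist)
    # index of the smallest radius covering >= (1 - nu) of the points
    idx = max(0, min(n - 1, math.ceil((1 - nu) * n) - 1))
    radius = ranked[idx]
    outliers = [i for i, r in enumerate(dist) if r > radius]
    return centre, radius, outliers
```

By construction at most a fraction ν of the training points fall outside the ball, which mirrors the analytic outlier-fraction estimate the chapter attributes to these models.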
Chapter 12 introduces the notion of a weight–error curve (WEC) for characterizing kernelized supervised learning models, including KDA, KRR, SVM, and Ridge-SVM.
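One instance of a WEC is easy to verify directly: for KRR with ridge parameter ρ, the dual weights a = (K + ρI)⁻¹y and the training errors ε = y − Ka satisfy ε = ρa, so the weight of each training point is proportional to its error and KRR's WEC is a straight line. A minimal pure-Python check of this identity follows (linear kernel; the helper names are illustrative):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (tiny linear solver)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def krr_weights_and_errors(X, y, rho=0.5):
    """KRR with a linear kernel: a = (K + rho*I)^{-1} y.  The residual
    e_i = y_i - sum_j K_ij a_j then equals rho * a_i for every point,
    i.e. the (a_i, e_i) pairs of KRR all lie on one straight line."""
    n = len(X)
    K = [[sum(u * v for u, v in zip(X[i], X[j])) for j in range(n)]
         for i in range(n)]
    A = [[K[i][j] + (rho if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    a = solve(A, y)
    errors = [y[i] - sum(K[i][j] * a[j] for j in range(n)) for i in range(n)]
    return a, errors
```

The linearity of KRR's weight–error relation is what makes the WEC a useful lens for comparing it against SVM and Ridge-SVM, whose curves take different shapes.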