Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Part I Machine learning and kernel vector spaces
- 1 Fundamentals of kernel-based machine learning
- 2 Kernel-induced vector spaces
- Part II Dimension-reduction: PCA/KPCA and feature selection
- Part III Unsupervised learning models for cluster analysis
- Part IV Kernel ridge regressors and variants
- Part V Support vector machines and variants
- Part VI Kernel methods for green machine learning technologies
- Part VII Kernel methods and statistical estimation theory
- Part VIII Appendices
- References
- Index
2 - Kernel-induced vector spaces
from Part I - Machine learning and kernel vector spaces
Published online by Cambridge University Press: 05 July 2014
Summary
Introduction
The notion of kernel-induced vector spaces is the cornerstone of kernel-based machine learning. Generalization of the traditional definition of a similarity metric plays a vital role in facilitating the analysis of complex and big data. It is often necessary to generalize the traditional Euclidean inner product to the more flexible and nonlinear inner products characterized by properly chosen kernel functions. The new inner product leads to a new distance metric, allowing the data analysis to be effectively performed in a higher-dimensional vector space. The topics addressed in this chapter are as follows.
Section 2.2 introduces Mercer's fundamental theorem, which states the necessary and sufficient condition for a function to be a Mercer kernel. It then examines several prominent kernel functions, including the polynomial and Gaussian kernel functions.
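A practical consequence of Mercer's condition is that the Gram matrix formed from any finite set of data points must be positive semi-definite. The sketch below (the data and kernel parameters are illustrative assumptions, not from the text) evaluates the polynomial and Gaussian kernels mentioned above and checks this property numerically:

```python
import numpy as np

# Hypothetical sample data; any finite set of points suffices for this check.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))  # five points in R^3

def poly_kernel(x, y, degree=2):
    """Polynomial kernel K(x, y) = (1 + x.y)^p."""
    return (1.0 + x @ y) ** degree

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel K(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def gram_matrix(kernel, X):
    """N x N matrix of pairwise kernel values."""
    N = len(X)
    return np.array([[kernel(X[i], X[j]) for j in range(N)] for i in range(N)])

# Mercer's condition implies every Gram matrix is positive semi-definite:
for kernel in (poly_kernel, gaussian_kernel):
    eigvals = np.linalg.eigvalsh(gram_matrix(kernel, X))
    assert eigvals.min() > -1e-10  # all eigenvalues nonnegative (up to round-off)
```

A negative eigenvalue in a Gram matrix would certify that the candidate function violates Mercer's condition; the converse check on finitely many points is of course only a sanity test, not a proof.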
Section 2.3 introduces the notion of intrinsic space associated with a kernel function. The intrinsic space is so named because it is independent of the training dataset. The dimension of the space is denoted by J and will be referred to as the intrinsic degree. This degree, whether finite or infinite, dictates the training efficiency and computational cost.
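For a concrete view of the intrinsic space, consider the second-order polynomial kernel on R^2, whose intrinsic degree is J = 6. The sketch below (the feature-map scaling and test points are illustrative) writes out the explicit intrinsic feature map and verifies that the inner product in the intrinsic space reproduces the kernel value:

```python
import numpy as np

def poly2_kernel(x, y):
    """Second-order polynomial kernel K(x, y) = (1 + x.y)^2 on R^2."""
    return (1.0 + x @ y) ** 2

def intrinsic_map(x):
    """Explicit intrinsic feature map for (1 + x.y)^2 with x in R^2.
    The intrinsic degree is J = 6, independent of any training set."""
    x1, x2 = x
    s = np.sqrt(2.0)
    return np.array([1.0, s * x1, s * x2, x1 ** 2, s * x1 * x2, x2 ** 2])

x = np.array([0.5, -1.0])
y = np.array([2.0, 0.25])
# The Euclidean inner product in the intrinsic space equals the kernel value:
assert np.isclose(intrinsic_map(x) @ intrinsic_map(y), poly2_kernel(x, y))
```

For a polynomial kernel the intrinsic degree grows combinatorially with the input dimension and polynomial order, and for the Gaussian kernel it is infinite, which is why the choice of kernel dictates training efficiency and computational cost.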
Section 2.4 introduces a finite-dimensional kernel-induced vector space, known as the empirical space, which is jointly determined by the kernel function and the training dataset. The dimension of the empirical space is equal to the data size N. When the LSP condition holds, both the intrinsic-space and the kernelized learning models will be at our disposal.
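The empirical space can be sketched concretely: each point x is mapped to the N-dimensional vector of its kernel values against the training set, and learning is carried out on the N x N Gram matrix. The example below (the toy dataset, Gaussian kernel width, and ridge parameter are illustrative assumptions) shows this representation together with a minimal kernelized regressor operating entirely in the empirical space:

```python
import numpy as np

# Hypothetical training set of size N = 4; the empirical space has dimension N.
X_train = np.array([[0.0], [1.0], [2.0], [3.0]])
y_train = np.array([0.0, 0.8, 0.9, 0.1])

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian kernel K(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def empirical_vector(x, X_train):
    """Map x into the N-dimensional empirical space:
    k(x) = [K(x, x_1), ..., K(x, x_N)]."""
    return np.array([gaussian_kernel(x, xi) for xi in X_train])

N = len(X_train)
# N x N Gram matrix, jointly determined by the kernel and the training data:
K = np.array([[gaussian_kernel(xi, xj) for xj in X_train] for xi in X_train])

# Minimal kernelized regressor sketch (the ridge parameter rho is an assumption):
rho = 0.1
a = np.linalg.solve(K + rho * np.eye(N), y_train)  # coefficients in empirical space

def predict(x):
    """Prediction expressed purely through empirical-space coordinates."""
    return a @ empirical_vector(x, X_train)
```

Note that nothing here requires the intrinsic feature map explicitly: once the LSP condition holds, the solution can be expressed through the N kernel evaluations alone, even when the intrinsic degree is infinite.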
- Type: Chapter
- Kernel Methods and Machine Learning, pp. 44-76
- Publisher: Cambridge University Press
- Print publication year: 2014