Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Part I Machine learning and kernel vector spaces
- Part II Dimension-reduction: PCA/KPCA and feature selection
- Part III Unsupervised learning models for cluster analysis
- Part IV Kernel ridge regressors and variants
- Part V Support vector machines and variants
- Part VI Kernel methods for green machine learning technologies
- Part VII Kernel methods and statistical estimation theory
- Part VIII Appendices
- References
- Index
Part I - Machine learning and kernel vector spaces
Published online by Cambridge University Press: 05 July 2014
Summary
Chapter 1 provides an overview of the broad spectrum of applications and problem formulations for kernel-based unsupervised and supervised learning methods. The dimension of the original vector space, along with its Euclidean inner product, often proves highly inadequate for complex data analysis. To provide a more effective similarity metric for any pair of objects, the kernel approach replaces the traditional Euclidean inner product with more sophisticated, kernel-induced inner products associated with the corresponding kernel-induced vector spaces. Among the most useful such spaces are the (primal) intrinsic and (dual) empirical spaces.
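As a minimal illustration of this idea (not taken from the book), the sketch below contrasts the Euclidean inner product with a Gaussian (RBF) kernel used as a kernel-induced similarity metric; the function names and the bandwidth parameter `gamma` are illustrative assumptions.

```python
import numpy as np

def euclidean_inner(x, y):
    # Traditional Euclidean inner product in the original vector space.
    return float(np.dot(x, y))

def rbf_kernel(x, y, gamma=0.5):
    # A kernel-induced similarity: Gaussian (RBF) kernel.
    # gamma is an illustrative bandwidth choice, not from the text.
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
z = np.array([0.9, 0.1])

# The Euclidean inner product gives dot(x, y) = 0 and dot(x, z) = 0.9,
# while the kernel ranks z as far more similar to x than y is.
print(euclidean_inner(x, y), rbf_kernel(x, y), rbf_kernel(x, z))
```

The kernel value depends only on the distance between the two objects, so it behaves as a bounded similarity score in [0, 1] rather than an unbounded projection.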
The interplay between the formulations of learning models in the primal and dual spaces plays a key role both in the theoretical analysis and in the practical implementation of kernel methods. Chapter 1 shows that a vital condition for the kernelization of a learning model is the learning subspace property (LSP), which is often verifiable via Theorem 1.1. In fact, the optimization formulation prescribed in Theorem 1.1 covers most, if not all, of the ℓ2-based learning models treated in this book – both unsupervised and supervised.
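To make the dual-space (empirical) formulation concrete, here is a hedged sketch of ℓ2 ridge regression kernelized under the LSP: the optimal solution lies in the span of the training vectors, so training and prediction need only kernel values. The function names and the toy linear kernel are assumptions for illustration, not the book's notation.

```python
import numpy as np

def kernel_ridge_fit(K, y, rho=1.0):
    # Dual (empirical-space) solution: a = (K + rho*I)^{-1} y.
    # Thanks to the LSP, only the N x N kernel matrix of the
    # training set is required -- never the primal feature vectors.
    N = K.shape[0]
    return np.linalg.solve(K + rho * np.eye(N), y)

def kernel_ridge_predict(a, k_test):
    # A test point enters only through its kernel values against
    # the training set -- the hallmark of a kernelized model.
    return float(k_test @ a)

# Toy scalar data with a linear kernel K(x, x') = x * x'.
X = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])            # exactly y = 2x
K = np.outer(X, X)                       # training kernel matrix
a = kernel_ridge_fit(K, y, rho=1e-8)
pred = kernel_ridge_predict(a, 2.0 * X)  # kernel values of x_test = 2.0
```

With a vanishing ridge parameter the dual solution recovers the underlying linear map, so the prediction at x = 2 is close to 4; swapping in any Mercer kernel changes only how `K` and `k_test` are computed.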
Chapter 2 starts with the vital Theorem 2.1, which states that Mercer's condition on the kernel function is imperative for the existence of the kernel-induced vector spaces. For vectorial data analysis, in the first stage, the original vector space can be mapped to the kernel-induced intrinsic vector space. Here, every individual object is represented by a (possibly high-dimensional) feature vector in the intrinsic space.
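The mapping into the intrinsic space can be sketched for a kernel whose feature map is explicit. The example below (an illustrative sketch, not from the book) uses the degree-2 homogeneous polynomial kernel K(x, y) = (x·y)² on R², whose intrinsic space is three-dimensional, and checks that the kernel value equals the inner product of the mapped feature vectors, as Mercer's condition guarantees.

```python
import numpy as np

def intrinsic_map(x):
    # Explicit feature map into the intrinsic space for the
    # degree-2 homogeneous polynomial kernel, with x in R^2.
    # The intrinsic space here has dimension 3.
    x1, x2 = x
    return np.array([x1 * x1, np.sqrt(2.0) * x1 * x2, x2 * x2])

def poly_kernel(x, y):
    # K(x, y) = (x . y)^2, a Mercer kernel on R^2.
    return float(np.dot(x, y) ** 2)

x = np.array([1.0, 2.0])
y = np.array([3.0, 1.0])

lhs = poly_kernel(x, y)                               # (1*3 + 2*1)^2 = 25
rhs = float(np.dot(intrinsic_map(x), intrinsic_map(y)))
print(lhs, rhs)
```

For kernels such as the Gaussian the intrinsic space is infinite-dimensional, so no finite explicit map exists; the kernel trick then works entirely in the dual (empirical) space.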
- Kernel Methods and Machine Learning, pp. 1–2. Publisher: Cambridge University Press. Print publication year: 2014.