Published online by Cambridge University Press: 05 July 2014
The traditional curse of dimensionality focuses on the extreme dimensionality of the feature space, i.e. M. For kernelized learning models in big-data analysis, however, the concern naturally shifts to the extreme dimensionality of the kernel matrix, N, which is dictated by the size of the training dataset. In some biomedical applications, for example, the training-set size may reach hundreds of thousands; in social-media applications it can easily be of the order of millions. This creates a new large-scale learning paradigm, which calls for a new level of computational tools, both in hardware and in software.
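To make the scale concrete, the following sketch (the function names are illustrative, not from the text) estimates the storage cost of a dense N x N kernel matrix in double precision, which grows quadratically with the training-set size N, independently of the feature dimension M:

```python
def kernel_matrix_bytes(n_samples: int) -> int:
    """Bytes to store a dense N x N kernel matrix in float64 (8 bytes/entry)."""
    return n_samples * n_samples * 8


def feature_matrix_bytes(n_samples: int, n_features: int) -> int:
    """Bytes to store the raw N x M data matrix in float64, for comparison."""
    return n_samples * n_features * 8


if __name__ == "__main__":
    # Quadratic growth in N: each 10x increase in N costs 100x in storage.
    for n in (1_000, 100_000, 1_000_000):
        gb = kernel_matrix_bytes(n) / 1e9
        print(f"N = {n:>9,}: kernel matrix needs about {gb:,.1f} GB")
```

At N of the order of millions, the kernel matrix alone would require terabytes, which is why training-set size, rather than feature dimension, dominates the cost of kernelized learning at this scale.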
Given kernelizability, we have at our disposal two learning models, represented by two different kernel-induced vector spaces, and our attention should now shift to the interplay between these two representations. Even though the two models are theoretically equivalent, they can incur very different implementation costs for learning and prediction. For cost-effective system implementation, one should choose the lower-cost representation, whether intrinsic or empirical. For example, if the dimension of the empirical space is small and manageable, an empirical-space learning model will be more appealing. The opposite holds, however, when the number of training vectors is extremely large, as in the “big data” learning scenario. In that case, one must give serious consideration to the intrinsic model, whose cost can be controlled by properly adjusting the order of the kernel function.
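As a minimal sketch of this tradeoff, consider a polynomial kernel of order p over an M-dimensional feature space, whose intrinsic degree is J = C(M + p, p), while the empirical space has dimension N, the number of training vectors. The helper below (names are illustrative, not from the text) picks the lower-dimensional representation:

```python
from math import comb


def intrinsic_degree(M: int, p: int) -> int:
    """Intrinsic degree J of a polynomial kernel of order p over an
    M-dimensional feature space: J = C(M + p, p)."""
    return comb(M + p, p)


def cheaper_representation(M: int, p: int, N: int) -> str:
    """Choose the kernel-induced space with the lower dimension:
    intrinsic (dimension J) versus empirical (dimension N)."""
    return "intrinsic" if intrinsic_degree(M, p) < N else "empirical"


if __name__ == "__main__":
    # With M = 10 features, a second-order kernel, and a million samples,
    # J = C(12, 2) = 66 << N, so the intrinsic model is far cheaper.
    print(cheaper_representation(10, 2, 1_000_000))
```

This illustrates the closing point of the paragraph: for big-data scenarios with modest M and low kernel order, J stays small and controllable, whereas the empirical dimension N grows with the dataset.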