Book contents
- Frontmatter
- Contents
- Contributors
- Preface
- 1 Scaling Up Machine Learning: Introduction
- Part One Frameworks for Scaling Up Machine Learning
- Part Two Supervised and Unsupervised Learning Algorithms
- 6 PSVM: Parallel Support Vector Machines with Incomplete Cholesky Factorization
- 7 Massive SVM Parallelization Using Hardware Accelerators
- 8 Large-Scale Learning to Rank Using Boosted Decision Trees
- 9 The Transform Regression Algorithm
- 10 Parallel Belief Propagation in Factor Graphs
- 11 Distributed Gibbs Sampling for Latent Variable Models
- 12 Large-Scale Spectral Clustering with Map Reduce and MPI
- 13 Parallelizing Information-Theoretic Clustering Methods
- Part Three Alternative Learning Settings
- Part Four Applications
- Subject Index
- References
9 - The Transform Regression Algorithm
from Part Two - Supervised and Unsupervised Learning Algorithms
Published online by Cambridge University Press: 05 February 2012
Summary
Massive training datasets, ranging in size from tens of gigabytes to several terabytes, arise in diverse machine learning applications in areas such as text mining of web corpora, multimedia analysis of image and video data, retail modeling of customer transaction data, bioinformatic analysis of genomic and microarray data, medical analysis of clinical diagnostic data such as functional magnetic resonance imaging (fMRI) images, and environmental modeling using sensor and streaming data. Provost and Kolluri (1999), in their overview of machine learning with massive datasets, emphasize the need to develop parallel algorithms and implementations for these applications.
In this chapter, we describe the Transform Regression (TReg) algorithm (Pednault, 2006), a general-purpose, non-parametric methodology suitable for a wide variety of regression applications. TReg was originally created for the data mining component of the IBM InfoSphere Warehouse product, and its design was guided by a challenging set of requirements:
1. The modeling time should be comparable to that of linear regression.
2. The resulting models should be compact and efficient to apply.
3. The model quality should be reliable without any further tuning.
4. Model training and scoring should be parallelized for large datasets stored as partitioned tables in IBM's DB2 database systems.
Requirements 1 and 2 were deemed necessary for a successful commercial algorithm, although they ruled out certain ensemble-based methods that produce high-quality models but have high computation and storage requirements. Requirement 3 ensured that the chosen algorithm did not unduly compromise model quality in order to meet requirements 1 and 2.
- In: Scaling up Machine Learning: Parallel and Distributed Approaches, pp. 170-189. Publisher: Cambridge University Press. Print publication year: 2011.