4 - IBM Parallel Machine Learning Toolbox

from Part One - Frameworks for Scaling Up Machine Learning

Published online by Cambridge University Press: 05 February 2012

Edwin Pednault
Affiliation:
IBM Research, Yorktown Heights, NY, USA
Elad Yom-Tov
Affiliation:
Yahoo! Research, New York, NY, USA
Amol Ghoting
Affiliation:
IBM Research, Yorktown Heights, NY, USA
Ron Bekkerman
Affiliation:
LinkedIn Corporation, Mountain View, California
Mikhail Bilenko
Affiliation:
Microsoft Research, Redmond, Washington
John Langford
Affiliation:
Yahoo! Research, New York

Summary

In many ways, the objective of the IBM Parallel Machine Learning Toolbox (PML) is similar to that of Google's MapReduce programming model (Dean and Ghemawat, 2004) and the open-source Hadoop system: to provide Application Programming Interfaces (APIs) that enable programmers with no prior experience in parallel and distributed systems to implement parallel algorithms with relative ease. Like MapReduce and Hadoop, PML supports associative-commutative computations as its primary parallelization mechanism. Unlike MapReduce and Hadoop, PML fundamentally assumes that learning algorithms can be iterative, requiring multiple passes over the data. It also extends the associative-commutative computational model in several respects, the most important of which are:

  1. The ability to maintain the state of each worker node between iterations, making it possible, for example, to partition and distribute data structures across workers

  2. Efficient distribution of data, including the ability for each worker to read a subset of the data, to sample the data, or to scan the entire dataset

  3. Access to both sparse and dense datasets

  4. Parallel merge operations using tree structures for efficient collection of worker results on very large clusters (see the sketch following this list)
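
The following is a minimal sketch, in Python, of the idea behind items 1–4 above; the function names and the use of per-feature sums are illustrative assumptions, not PML's actual interface. Because the per-worker results combine with an associative-commutative operation, they can be collected pairwise in a tree rather than funneled through a single master.

```python
from typing import Dict, List


def merge(a: Dict[str, float], b: Dict[str, float]) -> Dict[str, float]:
    """Associative-commutative combine of two partial results
    (here: per-feature sums produced by two workers)."""
    out = dict(a)
    for key, value in b.items():
        out[key] = out.get(key, 0.0) + value
    return out


def tree_merge(partials: List[Dict[str, float]]) -> Dict[str, float]:
    """Combine worker results pairwise in about log2(W) rounds
    instead of collecting all W results at one node."""
    while len(partials) > 1:
        next_round = []
        for i in range(0, len(partials) - 1, 2):
            next_round.append(merge(partials[i], partials[i + 1]))
        if len(partials) % 2 == 1:
            next_round.append(partials[-1])  # odd worker carried forward
        partials = next_round
    return partials[0]


# Example: four workers, each holding partial feature sums over its data subset
workers = [{"x": 1.0, "y": 2.0}, {"x": 3.0}, {"y": 4.0}, {"x": 0.5, "y": 0.5}]
print(tree_merge(workers))  # {'x': 4.5, 'y': 6.5}
```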

To support these extensions to the computational model while preserving ease of use, PML provides an object-oriented API in which algorithms are objects that implement a predefined set of interface methods. The PML infrastructure then uses these interface methods to distribute algorithm objects and their computations across multiple compute nodes.
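
As an illustration of this style of API, the sketch below shows an algorithm object with interface methods for starting an iteration, processing records on a worker, merging partial results, and deciding whether another pass over the data is needed. The method names and the single-process driver are hypothetical and do not reproduce PML's actual interface.

```python
class MeanEstimator:
    """Toy algorithm object: estimates the mean of a distributed dataset.
    Method names are illustrative, not PML's actual API."""

    def begin_iteration(self):
        # Fresh per-worker state for this pass; state kept *between*
        # passes would live in instance attributes instead.
        return {"sum": 0.0, "count": 0}

    def process(self, state, record):
        # Called by a worker for each record in its partition of the data.
        state["sum"] += record
        state["count"] += 1
        return state

    def merge(self, a, b):
        # Associative-commutative merge of two workers' partial results,
        # suitable for tree-structured collection.
        return {"sum": a["sum"] + b["sum"], "count": a["count"] + b["count"]}

    def end_iteration(self, state):
        # Called with the fully merged result; returning False signals
        # that no further pass over the data is required.
        self.mean = state["sum"] / max(state["count"], 1)
        return False


# Driver-side usage (single process, for illustration only):
algo = MeanEstimator()
partitions = [[1.0, 2.0], [3.0], [4.0, 5.0]]  # one record list per worker
partials = []
for part in partitions:
    state = algo.begin_iteration()
    for record in part:
        state = algo.process(state, record)
    partials.append(state)
merged = partials[0]
for p in partials[1:]:
    merged = algo.merge(merged, p)
algo.end_iteration(merged)
print(algo.mean)  # 3.0
```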

Type: Chapter
Information: Scaling up Machine Learning: Parallel and Distributed Approaches, pp. 69–88
Publisher: Cambridge University Press
Print publication year: 2011

References

Agrawal, R., and Shafer, J. 1996. Parallel Mining of Association Rules. IEEE Transactions on Knowledge and Data Engineering.
Agrawal, R., and Srikant, R. 1994. Fast Algorithms for Mining Association Rules. In: Proceedings of the International Conference on Very Large Data Bases (VLDB).
Agrawal, R., and Srikant, R. 1995. Mining Sequential Patterns. In: Proceedings of the International Conference on Data Engineering (ICDE).
Agrawal, R., Imielinski, T., and Swami, A. 1993. Mining Association Rules between Sets of Items in Large Databases. In: Proceedings of the International Conference on Management of Data (SIGMOD).
AlSabti, K., Ranka, S., and Singh, V. 1998 (August). CLOUDS: Classification for Large or Out-of-Core Datasets. In: Conference on Knowledge Discovery and Data Mining.
Apte, C., Grossman, E., Pednault, E., Rosen, B., Tipu, F., and White, B. 1999. Probabilistic Estimation Based Data Mining for Discovering Insurance Risks. IEEE Intelligent Systems, 14(6), 49–58.
Apte, C., Bibelnieks, E., Natarajan, R., Pednault, E., Tipu, F., Campbell, D., and Nelson, B. 2001. Segmentation-Based Modeling for Advanced Targeted Marketing. Pages 408–413 of: Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM.
Apte, C., Natarajan, R., Pednault, E. P. D., and Tipu, F. 2002. A Probabilistic Estimation Framework for Predictive Modeling Analytics. IBM Systems Journal, 41(3), 438–448.
Ben-Haim, Y., and Yom-Tov, E. 2010. A Streaming Parallel Decision Tree Algorithm. Journal of Machine Learning Research, 11, 789–812.
Brin, S., Motwani, R., and Silverstein, C. 1997. Beyond Market Basket: Generalizing Association Rules to Correlations. In: Proceedings of the International Conference on Management of Data (SIGMOD).
Dean, J., and Ghemawat, S. 2004. MapReduce: Simplified Data Processing on Large Clusters. In: Proceedings of the Symposium on Operating System Design and Implementation.
Dong, G., and Li, J. 1999. Efficient Mining of Emerging Patterns: Discovering Trends and Differences. In: Proceedings of the International Conference on Knowledge Discovery and Data Mining (SIGKDD).
Dorneich, A., Natarajan, R., Pednault, E., and Tipu, F. 2006. Embedded Predictive Modeling in a Parallel Relational Database. Pages 569–574 of: SAC '06: Proceedings of the 2006 ACM Symposium on Applied Computing. New York: ACM.
Gehrke, J., Ganti, V., Ramakrishnan, R., and Loh, W.-Y. 1999 (June). BOAT – Optimistic Decision Tree Construction. Pages 169–180 of: ACM SIGMOD International Conference on Management of Data.
Han, J., Dong, G., and Yin, Y. 1999. Efficient Mining of Partial Periodic Patterns in Time Series Database. In: Proceedings of the International Conference on Data Engineering (ICDE).
Jin, R., and Agrawal, G. 2003 (May). Communication and Memory Efficient Parallel Decision Tree Construction. In: The 3rd SIAM International Conference on Data Mining.
Joshi, M. V., Karypis, G., and Kumar, V. 1998 (March). ScalParC: A New Scalable and Efficient Parallel Classification Algorithm for Mining Large Datasets. Pages 573–579 of: The 12th International Parallel Processing Symposium.
Mannila, H., Toivonen, H., and Verkamo, A. 1997. Discovery of Frequent Episodes in Event Sequences. Data Mining and Knowledge Discovery.
Mehta, M., Agrawal, R., and Rissanen, J. 1996. SLIQ: A Fast Scalable Classifier for Data Mining. Pages 18–32 of: The 5th International Conference on Extending Database Technology.
Natarajan, R., and Pednault, E. 2001. Using Simulated Pseudo Data to Speed Up Statistical Predictive Modeling from Massive Data Sets. In: First SIAM International Conference on Data Mining.
Natarajan, R., and Pednault, E. 2002. Segmented Regression Estimators for Massive Data Sets. In: Second SIAM International Conference on Data Mining.
Pednault, E. P. D. 2006. Transform Regression and the Kolmogorov Superposition Theorem. In: Proceedings of the Sixth SIAM International Conference on Data Mining.
Schölkopf, B., and Smola, A. J. 2002. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. Cambridge, MA: MIT Press.
Shafer, J., Agrawal, R., and Mehta, M. 1996. SPRINT: A Scalable Parallel Classifier for Data Mining. Pages 544–555 of: The 22nd International Conference on Very Large Databases.
Silverstein, C., Brin, S., Motwani, R., and Ullman, J. 1998. Scalable Techniques for Mining Causal Structures. In: Proceedings of the International Conference on Very Large Data Bases (VLDB).
Sonnenburg, S., Franc, V., Yom-Tov, E., and Sebag, M. 2008. Pascal Large Scale Learning Challenge.
Zaki, M., Parthasarathy, S., Ogihara, M., and Li, W. 1995. New Algorithms for Fast Discovery of Association Rules. In: Proceedings of the International Conference on Knowledge Discovery and Data Mining (SIGKDD).
Zhang, R., and Rudnicky, A. I. 2002. A Large Scale Clustering Scheme for Kernel k-Means. Page 40289 of: Proceedings of the 16th International Conference on Pattern Recognition (ICPR'02), Volume 4. Washington, DC: IEEE Computer Society.
