
14 - Parallel Online Learning

from Part Three - Alternative Learning Settings

Published online by Cambridge University Press:  05 February 2012

Daniel Hsu, Rutgers University
Nikos Karampatziakis, Cornell University
John Langford, Yahoo! Research, New York, NY, USA
Alex J. Smola, Yahoo! Research, Santa Clara, CA, USA

Edited by:
Ron Bekkerman, LinkedIn Corporation, Mountain View, California
Mikhail Bilenko, Microsoft Research, Redmond, Washington
John Langford, Yahoo! Research, New York
Information

Type: Chapter
Book: Scaling up Machine Learning: Parallel and Distributed Approaches, pp. 283–306
Publisher: Cambridge University Press
Print publication year: 2011


