
2 - MapReduce and the New Software Stack

Published online by Cambridge University Press:  05 December 2014

Jure Leskovec, Stanford University, California
Anand Rajaraman, Milliways Laboratories, California
Jeffrey David Ullman, Stanford University, California

Summary

Modern data-mining applications, often called “big-data” analysis, require us to manage immense amounts of data quickly. In many of these applications, the data is extremely regular, and there is ample opportunity to exploit parallelism. Important examples are:

  1. The ranking of Web pages by importance, which involves an iterated matrix-vector multiplication where the dimension is many billions (one such iteration is sketched just after this list).

  2. Searches in “friends” networks at social-networking sites, which involve graphs with hundreds of millions of nodes and many billions of edges.
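
The first of these examples reduces to repeated matrix-vector products. As a minimal sketch (assuming the matrix is stored as sparse (row, column, value) triples and the current vector is small enough to be known to every Map task), one iteration can be expressed as a map function and a reduce function in Python:

    from collections import defaultdict

    def map_task(matrix_triples, v):
        # Emit (i, m_ij * v_j) for every nonzero entry (i, j, m_ij) of M.
        for i, j, m_ij in matrix_triples:
            yield i, m_ij * v[j]

    def reduce_task(pairs):
        # Sum the partial products that share a row index i.
        sums = defaultdict(float)
        for i, partial in pairs:
            sums[i] += partial
        return dict(sums)

    # Illustrative 2x2 example: M stored as sparse triples, v = [1.0, 2.0].
    M = [(0, 0, 0.5), (0, 1, 0.25), (1, 0, 0.5), (1, 1, 0.75)]
    v = [1.0, 2.0]
    print(reduce_task(map_task(M, v)))   # {0: 1.0, 1: 2.0}

In a real MapReduce run, the grouping of partial products by row index is performed by the system between the two phases; here it is simulated by the dictionary inside reduce_task.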

To deal with applications such as these, a new software stack has evolved. These programming systems are designed to get their parallelism not from a “super-computer,” but from “computing clusters” – large collections of commodity hardware, including conventional processors (“compute nodes”) connected by Ethernet cables or inexpensive switches. The software stack begins with a new form of file system, called a “distributed file system,” which features much larger units than the disk blocks in a conventional operating system. Distributed file systems also provide replication of data or redundancy to protect against the frequent media failures that occur when data is distributed over thousands of low-cost compute nodes.
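
As a rough illustration of those two ideas, large chunks and replication, the toy function below splits a file into fixed-size chunks and assigns each chunk to several distinct compute nodes. The 64 MB chunk size and replication factor of 3 are representative values chosen for the sketch, not the parameters of any particular system:

    import itertools

    CHUNK_SIZE = 64 * 1024 * 1024   # 64 MB chunks (illustrative value)
    REPLICATION = 3                 # each chunk kept on 3 distinct nodes

    def place_chunks(file_size, nodes):
        # Return a plan mapping chunk index -> nodes holding a replica.
        num_chunks = -(-file_size // CHUNK_SIZE)     # ceiling division
        node_cycle = itertools.cycle(nodes)
        plan = {}
        for c in range(num_chunks):
            replicas = []
            while len(replicas) < REPLICATION:
                node = next(node_cycle)
                if node not in replicas:             # replicas must differ
                    replicas.append(node)
            plan[c] = replicas
        return plan

    # A 200 MB file becomes 4 chunks, each replicated on 3 of 4 nodes.
    print(place_chunks(200 * 1024 * 1024, ["n1", "n2", "n3", "n4"]))

Because every chunk lives on several distinct nodes, the failure of any single node costs no data.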

On top of these file systems, many different higher-level programming systems have been developed. Central to the new software stack is a programming system called MapReduce. Implementations of MapReduce enable many of the most common calculations on large-scale data to be performed on computing clusters efficiently and in a way that is tolerant of hardware failures during the computation.
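
The canonical illustration is counting word occurrences in a large collection of documents. The sketch below shows the two user-written functions; the grouping of pairs by key, which a MapReduce implementation performs between the two phases, is simulated here with an ordinary dictionary:

    from collections import defaultdict

    def map_fn(document):
        # Emit a (word, 1) pair for every word occurrence.
        for word in document.split():
            yield word, 1

    def reduce_fn(word, counts):
        # Sum all the counts emitted for one word.
        return word, sum(counts)

    def run_mapreduce(documents):
        groups = defaultdict(list)
        for doc in documents:                    # Map phase
            for word, count in map_fn(doc):
                groups[word].append(count)       # grouping by key (simulated)
        return dict(reduce_fn(w, c) for w, c in groups.items())   # Reduce phase

    print(run_mapreduce(["the cat sat", "the cat ran"]))
    # {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}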

MapReduce systems are evolving and extending rapidly. Today, it is common for MapReduce programs to be created from still higher-level programming systems, often an implementation of SQL. Further, MapReduce turns out to be a useful, but simple, case of more general and powerful ideas. We include in this chapter a discussion of generalizations of MapReduce, first to systems that support acyclic workflows and then to systems that implement recursive algorithms.
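
To make the connection to SQL concrete, consider the hypothetical query SELECT dept, SUM(salary) FROM emps GROUP BY dept (the table and columns are invented for this sketch). Systems in this family, Hive for example, compile an aggregation of this shape into a Map function that emits (grouping key, value) pairs and a Reduce function that aggregates each group:

    from collections import defaultdict

    def map_fn(row):
        # Emit (grouping key, value) for each input row.
        yield row["dept"], row["salary"]

    def reduce_fn(dept, salaries):
        # Aggregate the values collected for one group.
        return dept, sum(salaries)

    def run(rows):
        groups = defaultdict(list)
        for row in rows:
            for dept, salary in map_fn(row):
                groups[dept].append(salary)
        return dict(reduce_fn(d, s) for d, s in groups.items())

    emps = [{"dept": "sales", "salary": 100},
            {"dept": "sales", "salary": 120},
            {"dept": "eng",   "salary": 150}]
    print(run(emps))   # {'sales': 220, 'eng': 150}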

Our last topic for this chapter is the design of good MapReduce algorithms, a subject that often differs significantly from the matter of designing good parallel algorithms to be run on a supercomputer.

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2014


