
15 - Memory Hierarchy and Caching

Published online by Cambridge University Press:  27 April 2019

Sandeep Sen
Affiliation:
Indian Institute of Technology, Delhi
Amit Kumar
Affiliation:
Indian Institute of Technology, Delhi

Summary

Models of Memory Hierarchy

Designing the memory architecture is an important component of computer organization: it tries to strike a balance between computational speed and memory speed, i.e., the time to fetch operands from memory. Computation is much faster because the processing happens within the chip, whereas a memory access may involve off-chip memory units. To bridge this disparity, modern computers have several layers of memory, called cache memory, that provide faster access to operands. Because of technological and cost limitations, cache memories offer a range of speed–cost tradeoffs. For example, the L1 cache, the fastest level, is usually also the smallest. The L2 cache is larger, say by a factor of ten, but also considerably slower. Secondary memory, such as the disk, is the largest in size but can be 10,000 times slower than the L1 cache. For any large application, most of the data resides on disk and is transferred to the faster levels of cache when required.

This movement of data is usually beyond the control of the programmer and is managed by the operating system and the hardware. Guided by empirical principles known as temporal and spatial locality of memory access, various replacement policies are used to maximize the chances of keeping the operands in the faster cache levels. Nevertheless, there will be occasions when a required operand is not present in L1; one then has to reach out to L2 and beyond, and pay the penalty of a higher access cost. In other words, memory access cost is not uniform, as was assumed at the beginning of this book; for simplicity of analysis, we had pretended that it is the same everywhere.
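The effect of locality and replacement policies can be made concrete with a small simulation. The sketch below is illustrative, not part of the text: it assumes a fully associative cache of a fixed number of blocks with LRU replacement (one common choice of policy), and counts how many accesses miss the cache. A sequential scan touches each block many consecutive times (good spatial locality), while a large stride touches a fresh block on every access.

```python
from collections import OrderedDict

def count_misses(addresses, capacity, block_size):
    """Simulate a fully associative LRU cache holding `capacity` blocks
    of `block_size` consecutive words; return the number of misses."""
    cache = OrderedDict()  # block id -> None, ordered from least to most recent
    misses = 0
    for a in addresses:
        block = a // block_size
        if block in cache:
            cache.move_to_end(block)       # hit: refresh recency
        else:
            misses += 1                    # miss: fetch the block from slower memory
            cache[block] = None
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict the least recently used block
    return misses

n = 1024
seq = list(range(n))                        # sequential scan: strong spatial locality
strided = [(i * 64) % n for i in range(n)]  # stride of one full block: no reuse within a block
print(count_misses(seq, capacity=8, block_size=64))      # 16: one miss per block
print(count_misses(strided, capacity=8, block_size=64))  # 1024: every access misses
```

The strided pattern cycles through 16 distinct blocks while the cache holds only 8, which is the classic worst case for LRU: each block is evicted just before it is needed again.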

In this chapter, we do away with this assumption. For simpler exposition, however, we deal with only two levels of memory, slow and fast, where the slower memory has infinite size while the faster one is limited, say, of size M, and significantly faster. Consequently, we can pretend that the faster memory has zero (negligible) access cost and the slower memory has access cost 1.
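Under this two-level model, the total cost of an algorithm is simply the number of accesses that go to the slow memory. A minimal sketch of the accounting follows; the choice of LRU eviction is an assumption for illustration (the model itself does not fix a policy), and `access_cost` is a name introduced here, not from the text.

```python
from collections import OrderedDict

def access_cost(sequence, M):
    """Total cost of an access sequence in the two-level model:
    the fast memory holds at most M operands at cost 0 per access;
    each access that must go to slow memory costs 1.
    Eviction policy: LRU (an illustrative assumption)."""
    fast = OrderedDict()  # operands currently in fast memory, by recency
    cost = 0
    for x in sequence:
        if x in fast:
            fast.move_to_end(x)            # fast-memory access: cost 0
        else:
            cost += 1                      # slow-memory access: cost 1
            fast[x] = None
            if len(fast) > M:
                fast.popitem(last=False)   # evict the least recently used operand
    return cost

# Working set of size M fits entirely; one extra operand defeats LRU:
print(access_cost(list(range(4)) * 10, M=4))  # 4: only the initial loads are slow
print(access_cost(list(range(5)) * 10, M=4))  # 50: every access goes to slow memory
```

The second sweep shows why the model is interesting: an access pattern only slightly larger than M can make every access cost 1, so algorithms must be designed around the size of the fast memory.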

Type: Chapter
Book: Design and Analysis of Algorithms: A Contemporary Perspective, pp. 308–322
Publisher: Cambridge University Press
Print publication year: 2019

Chapter DOI: https://doi.org/10.1017/9781108654937.016