Tuesday, May 2, 2017

A Gallery of Memory Mountains

Through all 3 editions, CS:APP has used memory mountains to illustrate how the cache hierarchy affects memory system performance.  Here we compare the memory mountains of different processors over the years, revealing evolutionary changes in memory system design.

Here's the mountain from the First Edition, based on a 1999 Intel Pentium III Xeon:

[Figure: memory mountain for the 1999 Intel Pentium III Xeon]

The memory mountain shows the throughput achieved by a program repeatedly reading elements from an array of N elements, using a stride of S (i.e., accessing elements 0, S, 2S, 3S, and so on, up through the end of the array).  The performance, measured in megabytes (MB) per second, varies according to how many of the elements are found in one of the processor's caches.  For small values of N, all of the elements can be held in the L1 cache, achieving maximum read throughput.  For larger values of N, the elements can be held in the L2 cache, and the L1 cache may still help exploit spatial locality for smaller values of S.  For still larger values of N, the elements must reside in main memory, but both the L1 and L2 caches can improve performance when S is small enough to provide some degree of spatial locality.
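
To make this concrete, here is a minimal sketch of the kind of read loop behind each mountain.  The names (MAXBYTES, MAXELEMS, test) are illustrative rather than taken from the actual CS:APP source, which adds warmup runs and careful cycle-based timing.  Throughput is then the number of bytes read divided by the measured time.

    #define MAXBYTES (1 << 27)                 /* largest working set tested */
    #define MAXELEMS (MAXBYTES / sizeof(long))

    static long data[MAXELEMS];                /* the array being traversed */

    /* Read elements 0, stride, 2*stride, ... of data[0..elems-1].
       Accumulating into a local keeps the compiler from optimizing
       the reads away. */
    long test(long elems, long stride)
    {
        long acc = 0;
        for (long i = 0; i < elems; i += stride)
            acc += data[i];
        return acc;
    }

    #include <stdio.h>
    int main(void)
    {
        /* One (size, stride) point on the mountain; a real run sweeps both */
        printf("%ld\n", test(MAXELEMS, 1));
        return 0;
    }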

By way of reference, the use of the memory mountain for visualizing memory performance was devised by Thomas Stricker while he was a PhD student at CMU in the 1990s working for Prof. Thomas Gross.  Both of them now live in Switzerland, with Thomas Gross on the faculty at ETH.

[Figure: memory mountain for the 2009 iMac]

Jumping forward 9 years, the above figure illustrates the performance for a 2009 iMac that I still use as a home computer.  It has the same qualitative characteristics as the Pentium III, with two levels of cache, but with over 10X higher throughput overall.  You'll also notice how smooth the curves are.  That's partly because I rewrote the code last year to give more reliable timing measurements.

[Figure: memory mountain for the 2009 Intel Core i7 (Nehalem)]

For the second edition of CS:APP, we used a 2009 Intel Core i7 based on the Nehalem microarchitecture.  The data here are noisy; they predate my improved timing code.  There are several important features in this mountain not found in the earlier ones.  First, there are 3 levels of cache.  There is also a striking change along the back edge: the performance for a stride of 1 stays high for sizes up to around 16 MB, far more data than the L2 cache can hold.  This reflects the ability of the memory system to initiate prefetching, where it observes the memory access pattern and predicts which memory blocks will be read in the future.  It copies these blocks from L3 up to L2 and L1, so that when the processor reads these memory locations, the data will already be in the L1 cache.  The overall performance is well short of that measured for the contemporary (2008) Core Duo shown above.  This could partly be due to differences in the timing code: the newer version uses 4-way parallelism when reading the data, whereas the old code was purely sequential.
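
For readers curious what that 4-way parallelism looks like, here is a hedged sketch, written as a drop-in variant of the test function above (the name test4 is illustrative, not the actual CS:APP source).  Four independent accumulators let the processor keep several loads in flight at once instead of serializing on a single running sum.

    /* 4-way variant of the test function sketched earlier */
    long test4(long elems, long stride)
    {
        long acc0 = 0, acc1 = 0, acc2 = 0, acc3 = 0;
        long sx2 = stride * 2, sx3 = stride * 3, sx4 = stride * 4;
        long i;

        /* Main loop: four independent strided reads per iteration */
        for (i = 0; i + sx4 <= elems; i += sx4) {
            acc0 += data[i];
            acc1 += data[i + stride];
            acc2 += data[i + sx2];
            acc3 += data[i + sx3];
        }
        /* Finish any leftover elements sequentially */
        for (; i < elems; i += stride)
            acc0 += data[i];

        return (acc0 + acc1) + (acc2 + acc3);
    }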

[Figure: memory mountain for the 2013 Intel Core i5 (Haswell)]

For the third edition of CS:APP, we used a 2013 Intel Core i5 based on the Haswell microarchitecture.  The above figure shows measurements for this machine using the improved timing code.  The memory system is qualitatively similar to that of the Nehalem processor from CS:APP2e: it has 3 levels of cache and uses prefetching.  Note, though, how much higher the overall throughputs are.

[Figure: memory mountain for the Raspberry Pi 3 (ARM Cortex-A53)]

The final mountain uses measurements from a Raspberry Pi 3.  The Pi uses a processor included as part of a Broadcom "system on a chip" designed for use in smartphones.  The processor is based on the ARM Cortex-A53 microarchitecture.  Performance-wise, it is much slower than a contemporary desktop processor, but it is very respectable for an embedded processor.  There's a clear 2-level cache structure.  It also appears that some amount of prefetching occurs for both cache and main-memory accesses.

Over the nearly 20-year time span represented by these machines, we can see that memory systems have undergone evolutionary changes: more levels of cache have been added, caches have become larger, and throughputs have improved by over an order of magnitude.  Prefetching helps when access patterns are predictable.  It's interesting how the visualization provided by memory mountains reveals these qualitative and quantitative changes.

2 comments:

  1. For a multi-core processor, could the throughput be improved correspondingly?

    Replies
    1. It would be an interesting test to run multiple threads, all running the same access patterns. What you'd find, I think, is that the L1 and L2 performance remains the same, but the L3 and main memory performance drops in proportion to the number of threads, since these resources are shared across cores.
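
       A minimal sketch of that experiment, assuming POSIX threads (all names here are illustrative): each thread sweeps its own private 64 MB array at stride 1, so every core issues the same access pattern at the same time.  Timing each thread's sweep would show whether per-thread throughput holds up as threads are added.

           #include <pthread.h>
           #include <stdlib.h>

           #define NTHREADS 4
           #define NELEMS   (1L << 23)   /* 8M longs = 64 MB per thread */

           /* Each thread reads its own private array; returning the sum
              keeps the reads from being optimized away. */
           void *reader(void *arg)
           {
               long *data = arg;
               long acc = 0;
               for (long i = 0; i < NELEMS; i++)
                   acc += data[i];
               return (void *) acc;
           }

           int main(void)   /* compile with -pthread */
           {
               pthread_t tid[NTHREADS];
               /* Launch the concurrent sweeps (per-thread timing omitted) */
               for (long t = 0; t < NTHREADS; t++)
                   pthread_create(&tid[t], NULL, reader, calloc(NELEMS, sizeof(long)));
               for (int t = 0; t < NTHREADS; t++)
                   pthread_join(tid[t], NULL);
               return 0;
           }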
