Monday, October 8, 2018
Back in 2005, we did a similar verification of the Y86 processors appearing in the first edition of CS:APP. Our recent work provides an update of both the processor designs and the verification tool.
What did we actually do? The diagram below shows the process by which we constructed a verification model:
This model allows us to compare the sequential implementation SEQ to a pipelined implementation PIPE. That is, we can show that for any possible program and any starting state, the two implementations will yield the same results, in terms of their updates to the register file, the condition codes, the program counter, and the memory. In other words, all of the mechanisms in PIPE to deal with data and control hazards—data forwarding, stalling, and exception handling—operate correctly.
As the figure illustrates, the HCL control logic descriptions were translated directly into UCLID5 code using a program hcl2U. We have now developed HCL translators to map the control logic into 1) C code for use in a simulator, 2) Verilog code for use by a logic synthesis program, and 3) both the original and most recent versions of UCLID.
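For a flavor of what these translators do, here is a hypothetical sketch: an HCL case expression from the SEQ description (shown in the comment) together with the kind of C function a translator could generate from it. The names and encodings follow the book's Y86-64 conventions, but this is illustrative rather than the actual generated code.

```c
/* HCL source (a case expression selecting register ID srcA):
 *
 *   word srcA = [
 *       icode in { IRRMOVQ, IRMMOVQ, IOPQ, IPUSHQ } : rA;
 *       icode in { IPOPQ, IRET } : RRSP;
 *       1 : RNONE;   # Don't need register
 *   ];
 */
enum { IHALT, INOP, IRRMOVQ, IIRMOVQ, IRMMOVQ, IMRMOVQ,
       IOPQ, IJXX, ICALL, IRET, IPUSHQ, IPOPQ };
enum { RRSP = 4, RNONE = 15 };

long icode, rA;   /* simulator state, set elsewhere */

/* A plausible C rendering: each HCL case becomes a test in a
   nested conditional expression. */
long gen_srcA(void)
{
    return ((icode == IRRMOVQ || icode == IRMMOVQ ||
             icode == IOPQ    || icode == IPUSHQ) ? rA :
            (icode == IPOPQ   || icode == IRET)   ? RRSP :
            RNONE);
}
```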
UCLID5 makes use of the Z3 SMT solver from Microsoft Research to search for inconsistencies between the two processors. Because Z3 is a complete solver, it can determine definitively that no such inconsistency exists, and therefore that the design is correct.
We considered seven different variants of the pipeline: the one described in the book, as well as ones that implement various extensions and variations given as homework assignments. All seven were proved correct.
There is really no surprise here, but it's nice to be sure that there's not some subtle bug lingering in our designs.
A complete report on this effort is available as CMU Technical Report CMU-CS-18-122.
Wednesday, June 6, 2018
Access for 180 days (enough for taking a course) is available for $33.99. Perpetual access costs $99.99.
Our understanding is that access is provided via an Internet-based portal, rather than as a standalone electronic document.
Wednesday, July 5, 2017
Friday, May 5, 2017
Tuesday, May 2, 2017
Here's the memory mountain for a recent Intel processor:
Notice the bump in the mountain. If you look at the other mountains on this page, you'll see this phenomenon in other cases as well, specifically for Intel processors that employ prefetching.
Let's investigate this phenomenon more closely. Quite honestly, though, I have no explanation for why it occurs.
In the above processor, a cache block contains eight 8-byte long integers. For a stride of S, considering only spatial locality, we would expect a miss rate of around S/8 for strides up to 8. For example, a stride of 1 would yield one miss followed by 7 hits, while a stride of 2 would yield one miss followed by 3 hits. For strides of 8 or more, the miss rate would be 100%. If a read incurring a cache miss incurs a delay M, and one incurring a hit incurs a delay H, then we would expect the average time per access to be M*(S/8) + H*(1-S/8). The throughput should be the reciprocal of the average delay.
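To make the arithmetic concrete, here is a small C sketch of this predictive model. The miss delay M and hit delay H are hypothetical values; in practice they would be fit to measured data.

```c
#include <stdio.h>

/* Predictive model from the text: for stride S (1 <= S <= 8),
   the miss rate is S/8, so the average delay per access is
   M*(S/8) + H*(1 - S/8), and throughput is its reciprocal. */
int main(void)
{
    double M = 100.0;   /* hypothetical miss delay (cycles) */
    double H = 4.0;     /* hypothetical hit delay (cycles)  */

    for (int S = 1; S <= 8; S++) {
        double miss_rate = S / 8.0;
        double avg_delay = M * miss_rate + H * (1.0 - miss_rate);
        printf("S=%d: miss rate %.3f, avg delay %5.1f, relative throughput %.4f\n",
               S, miss_rate, avg_delay, 1.0 / avg_delay);
    }
    return 0;
}
```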
For the larger sizes, where data resides in the L3 cache, this predictive model holds fairly well. Here are the data for a size of 4 megabytes:
I have experimented with the measurement code to see if the bump is some artifact of how we run the tests, but I believe this is not the case. I believe there is some feature of the memory system that causes this phenomenon.
I would welcome any ideas on what might cause memory mountains to have this bump.
Here's the mountain from the First Edition, based on a 1999 Intel Pentium III Xeon:
The memory mountain shows the throughput achieved by a program repeatedly reading elements from an array of N elements, using a stride of S (i.e., accessing elements 0, S, 2S, 3S, ..., N-1). The performance, measured in megabytes (MB) per second, varies according to how many of the elements are found in one of the processor's caches. For small values of N, the elements can be held in the L1 cache, achieving maximum read throughput. For larger values of N, the elements can be held in the L2 cache, and the L1 cache may be helpful for exploiting spatial locality for smaller values of S. For large values of N, the elements will reside in main memory, but both the L1 and L2 cache can improve performance when S enables some degree of spatial locality.
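At its core, the program being timed is just a strided read loop over the array, along the lines of the sketch below. (This is a simplification; the actual measurement code also unrolls the loop and accumulates into multiple variables so that several reads can be in flight at once.)

```c
/* Simplified sketch of the read loop being timed: sum every
   stride-th element of an n-element array of longs. */
long sum_stride(long *data, long n, long stride)
{
    long acc = 0;
    for (long i = 0; i < n; i += stride)
        acc += data[i];
    return acc;
}
```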
By way of reference, the use of the memory mountain for visualizing memory performance was devised by Thomas Stricker while he was a PhD student at CMU in the 1990s working for Prof. Thomas Gross. Both of them now live in Switzerland, with Thomas Gross on the faculty at ETH.
For the third edition of CS:APP, we used a 2013 Intel Core i5 based on the Haswell microarchitecture. The above figure shows measurements for this machine using the improved timing code. Overall, though, the memory system is similar to that of the Nehalem processor from CS:APP2e. It has 3 levels of cache and uses prefetching. Note how high the overall throughputs are.
Over the nearly 20-year time span represented by these machines, we can see that memory systems have undergone evolutionary changes. More levels of cache have been added, and caches have become larger. Throughputs have improved by over an order of magnitude. Prefetching helps when access patterns are predictable. It's interesting to see how the visualization provided by memory mountains enables us to see these qualitative and quantitative changes.
Tuesday, April 4, 2017
We have made all of the code used in this example available on the CSAPP student website, as part of the Chapter 5 materials. The code has been tested on multiple Linux systems. You can see how the code does when running on your machine.
More information is available at the publisher's website.
The translation was performed by Yili Gong and Lian He of Wuhan University. We are grateful for the conscientious job Yili has done for all three editions of the book.
Thursday, February 18, 2016
It's instructive to read the bug tracking reports at the Google post on their discovery:
as well as the bug tracking log covering the actual error:
There are several important insights to be gained from this report:
- Buffer overflows are still a key source of software vulnerabilities. Although they can be mitigated by address space randomization and other techniques, they still show up.
- This bug was introduced with glibc 2.9 in May 2008. It was first reported in July 2015 and fixed in February 2016. That's a long time for a security vulnerability to lie undetected.
- The bug only happens when a string is given that exceeds the 2048-byte limit of the regular buffer. The code then allocates more memory, but it does not properly update some of the size information. Apparently, this part of the code was not tested very carefully. It's an unfortunate reality of program testing that it's hard to reach all of the corner cases in a program. It seems like using code coverage tools could have been beneficial here. A sketch of this general bug pattern appears below.
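To illustrate that last point, here is a minimal C sketch of the general pattern described in the report. The names are hypothetical, and this is not the actual glibc code; it just shows how a heap fallback with stale bookkeeping can turn into an overflow.

```c
#include <stdlib.h>
#include <string.h>

#define BUFSIZE 2048

/* Hypothetical reply buffer: a pointer plus its recorded capacity. */
struct answer_buf {
    char  *buf;    /* where replies are written        */
    size_t size;   /* recorded capacity of that buffer */
};

void store_reply(struct answer_buf *ab, const char *reply, size_t len)
{
    if (len > ab->size) {
        char *bigger = malloc(len);
        if (bigger == NULL)
            return;
        /* BUG pattern: the capacity record is enlarged, but
           ab->buf is never pointed at the new allocation
           (the fix would add "ab->buf = bigger;" here). */
        ab->size = len;
        (void) bigger;   /* leaked in this buggy sketch */
    }
    memcpy(ab->buf, reply, len);   /* overflows the original
                                      2048-byte buffer when
                                      len > BUFSIZE */
}
```

Only inputs longer than 2048 bytes reach the buggy branch, which is exactly the kind of corner case that routine testing tends to miss.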
Tuesday, February 9, 2016
Tuesday, January 12, 2016
Monday, January 11, 2016
The Attack Lab was first offered to CMU students in Fall 2015. It is the 64-bit successor to the 32-bit Buffer Lab and was designed for CS:APP3e. In this lab, students are given a pair of unique custom-generated x86-64 binary executables, called targets, that have buffer overflow bugs. One target is vulnerable to code injection attacks. The other is vulnerable to return-oriented programming attacks. Students are asked to modify the behavior of the targets by developing exploits based on either code injection or return-oriented programming.
Wednesday, August 26, 2015
This is a clever mnemonic devised by Geoff Kuenning of Harvey Mudd College to help him remember which registers are used for passing arguments in a Linux x86-64 system:

Diane's silk dress costs $89

That is, the first six arguments are passed in registers %rdi, %rsi, %rdx, %rcx, %r8, and %r9.

Thanks to Geoff for providing this helpful aid!
Monday, August 17, 2015
Tuesday, June 2, 2015
This fall, we will be teaching 15-213, the CMU course that originally inspired the book. Leading up to that, we will update the lecture slides and the labs, and we will make these available on the instructors' site.
Wednesday, February 11, 2015
According to Amazon, the book will be available starting March 11.
Here are some chapter-by-chapter highlights:
- Ch. 2 (Data): After hearing many students say "It's too hard!" we took a closer look and decided that the presentation could be improved by more clearly indicating which sections should be treated as informal discussion and which should be studied as formal derivations (and possibly skipped on first reading). Hopefully, these guideposts will help students navigate the material without any reduction in the rigor of the presentation.
- Ch. 3 (Machine Programming): It's x86-64 all the way! The entire presentation of machine language is based on x86-64. Now that even cellphones run 64-bit processors, it seemed like it was time to make this change. Eliminating IA32 also freed up space to put floating-point machine code back in (it was present in the 1st edition and moved to the web for the 2nd edition). We generated a web aside describing IA32. Once students know x86-64, the step (back) to IA32 is fairly simple.
- Ch. 4 (Architecture): Welcome to Y86-64! We made the simple change of expanding all of the data widths to 64 bits. We also rewrote all of the machine code to use x86-64 procedure conventions.
- Ch. 5 (Optimization): We brought the machine-dependent performance optimization up to date based on more recent versions of x86 processors. The web aside on SIMD programming has been updated for AVX2. This material becomes even more relevant as industry looks to the SIMD instructions to juice up performance.
- Ch. 7 (Linking): Linking has been updated for x86-64. We expanded the discussion of position-independent code and introduced library interpositioning.
- Ch. 8 (Exceptional Control Flow): We have added a more rigorous treatment of signal handlers, including signal-safe functions. A sketch of the recommended handler style appears after this list.
- Ch. 11 (Network Programming): We have rewritten all of the code to use new libraries that support protocol-independent and thread-safe programming.
- Ch. 12 (Concurrent Programming): We have increased our coverage of thread-level parallelism to make programs run faster on multi-core processors.
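As a taste of the new Chapter 8 material, here is a minimal sketch of a handler written in the style the chapter advocates. The handler itself is hypothetical; the key points are saving and restoring errno, communicating through a volatile sig_atomic_t flag, and calling only async-signal-safe functions (write is safe; printf is not).

```c
#include <errno.h>
#include <signal.h>
#include <unistd.h>

volatile sig_atomic_t got_sigchld = 0;

void sigchld_handler(int sig)
{
    int olderrno = errno;   /* save errno on entry */
    static const char msg[] = "caught SIGCHLD\n";

    got_sigchld = 1;        /* set flag for the main loop */
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
    errno = olderrno;       /* restore errno on exit */
}
```

The main program would install the handler with sigaction and could then use sigsuspend to wait for the flag to be set.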
Friday, June 13, 2014
Here's a summary of the planned changes for each chapter.
- Introduction. Minor revisions. Move the discussion of Amdahl's Law here, since it applies across many aspects of computer systems.
- Data. Do some tuning to improve the presentation, without diminishing the core content. Present fixed-word-size data types.
- Machine code. A complete rewrite, using x86-64 as the machine language rather than IA32. Also update examples based on a more recent version of GCC (4.8.1). Thankfully, GCC has introduced a new optimization level, specified with the command-line option '-Og', that provides a fairly direct mapping between the C and assembly code. We will provide a web aside describing IA32.
- Architecture. Shift from Y86 to Y86-64. This includes having 15 registers (omitting %r15 simplifies instruction encoding) and making all data and addresses 64 bits. Also update all of the code examples to follow the x86-64 ABI conventions.
- Optimization. All examples will be updated (they're mostly x86-64 already).
- Memory Hierarchy. Updated to reflect more recent technology.
- Linking. Rewritten for x86-64. We've also expanded the discussion of using the GOT and PLT to create position-independent code, and added a new section on the very cool technique of library interpositioning.
- Exceptional Control Flow. More rigorous treatment of signal handlers, including async-signal-safe functions, specific guidelines for writing signal handlers, and using sigsuspend to wait for handlers.
- VM. Minor revisions.
- System-Level I/O. Added a new section on files and the file hierarchy.
- Network programming. Protocol-independent and thread-safe sockets programming using the modern getaddrinfo and getnameinfo functions, replacing the obsolete and non-reentrant gethostbyname and gethostbyaddr functions. (A sketch appears after this list.)
- Concurrent programming. Enhanced coverage of performance aspects of parallel multicore programs.
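As a taste of the new sockets style, here is a minimal sketch of a protocol-independent host lookup using getaddrinfo and getnameinfo (the function name is hypothetical). Unlike gethostbyname, this code is reentrant and handles IPv4 and IPv6 uniformly.

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

/* Print every address associated with a host name. */
int print_addresses(const char *host)
{
    struct addrinfo hints, *list, *p;
    char addr[NI_MAXHOST];
    int rc;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    if ((rc = getaddrinfo(host, NULL, &hints, &list)) != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
        return -1;
    }
    for (p = list; p != NULL; p = p->ai_next) {
        if (getnameinfo(p->ai_addr, p->ai_addrlen, addr, sizeof addr,
                        NULL, 0, NI_NUMERICHOST) == 0)
            printf("%s\n", addr);
    }
    freeaddrinfo(list);
    return 0;
}
```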
Wednesday, March 27, 2013
Tuesday, January 22, 2013
Monday, November 12, 2012
- 167 students
- 14 recitation sections (12 students each)
- 14 faculty doing recitations
- 8 faculty doing lectures