Wednesday, March 27, 2013
Tuesday, January 22, 2013
Monday, November 12, 2012
- 167 students
- 14 recitation sections (12 students each)
- 14 faculty doing recitations
- 8 faculty doing lectures
Monday, June 11, 2012
In a recent blog post, I noted that 52% of all copies of CS:APP sold have been in Chinese. Prof. Yili Gong of Wuhan University did the translations for both the first and second editions of the book. Prof. Gong has also been a valuable contributor to our errata.
I recently came back from a trip to China, where I gave lectures about CS:APP at both Peking University and Tsinghua University, both of which use the book in their courses. Looking at our adoptions list, there are only 8 universities in China that we know of using CS:APP as a textbook. Apparently, the vast majority of copies sold in China are being used by individuals for self study.
Wednesday, May 30, 2012
- English. Including versions published in India (1st edition only) and China (1st and 2nd edition) for readers in those two countries
- Chinese (1st and 2nd edition)
- Korean (2nd edition)
- Russian (1st edition)
- Macedonian (1st edition)
One thing that's clear is that we're very popular in China: fully 52% of the total has been in Chinese, and another 15% has been the English version for the Chinese market.
Thursday, May 17, 2012
The Bomb Lab servers assign diffusions and explosions to Bomb IDs, rather than users, and Bomb IDs start over from scratch each term. Thus, if a student who took the class last semester ran their old bomb while the lab was underway this semester, then the explosions and diffusions from the old bomb would be incorrectly assigned to the current bomb with the same Bomb ID.
To address this, we've added a per-semester identifier, called $LABID, to the Bomb Lab config file. Instructors can set this variable each term (for example $LABID="f12") to uniquely identify each offering. Any results from previous bombs with different $LABIDs are ignored.
Thanks to Prof. Godmar Back, of Virginia Tech, for pointing this out.
Monday, April 23, 2012
(1) Some recent gcc builds automatically enable the -fstack-protector option. We now explicitly disable this by compiling the buffer bomb with -fno-stack-protector.
(2) In order to avoid infinite loops during autograding, the previous update from February 2012 introduced a timeout feature that was always enabled. However, this was a problem for students who were debugging their bombs in gdb. We now enable timeouts only during autograding.
Thanks to Prof. James Riely, DePaul University for pointing these out to us.
Tuesday, February 21, 2012
Saturday, February 4, 2012
Shortly after arriving, we visited Strathmore University, where I gave a presentation about CS:APP. There were students and faculty members from several area universities. The talk went very well, with interesting and insightful questions from the audience. Perhaps the most striking response occurred when I showed our map of schools using CS:APP as a text book as of Jan. 1, 2012:
Sunday, January 22, 2012
map displaying all of the schools we know of that are using CS:APP as a textbook.
Check out http://csapp.cs.cmu.edu/public/adoption-map.html.
Tuesday, October 18, 2011
It's hard to imagine a world without C, but prior to its development in the early 1970s, essentially all system-level programming was done in assembly code. That had two undesirable features: 1) programs were not portable from one machine to another, and 2) writing assembly code is tedious and error-prone. Although higher-level languages, such as Cobol, Fortran, and Algol, were well established at the time, they sought to hide away many of the low-level details that a system-level programmer must manipulate. Here are some examples of features that were not available in these languages:
- Bitwise logical operators
- Integer variables of different sizes
- Byte-level memory access
- The ability to selectively bypass the type system via unions and casts
C had predecessors: the B language, developed by Ritchie's colleague Ken Thompson, which in turn arose from BCPL, developed by Martin Richards of Cambridge University in 1966. These predecessors had many limitations, however, and so C really was the first, and arguably still the best, language for implementing system software.
C was, to my knowledge, the first high-level language that embraced the byte (although referred to as char) as a first-class data element---something that could be used as a data value, read from or written to memory, and replicated as an array. Having the ability to operate on byte arrays enables C programmers to create arbitrary system-level data structures, such as page tables, I/O buffers, and network packets. Such a capability was previously only available to assembly language programmers.
The early C compilers did not produce very good code. To get C programs to run fast, programmers had to do a lot of low-level tweaking of the code, such as declaring local variables to be of type register, converting array references to pointers, and explicitly performing such optimizations as code motion, common subexpression elimination, and strength reduction. This had the property of making the programs nearly illegible. Serious code tuners often resorted to assembly language. Fortunately, modern compilers make most of this low-level hacking unnecessary.
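A sketch of what that hand tuning looked like, next to the straightforward version that a modern compiler optimizes just as well (both functions are my own illustration):

```c
/* Hand-tuned 1970s style: register variables, pointers in place of
 * array indexing, with the strength reduction done by hand. */
void scale_old(int *a, int n, int k) {
    register int *p = a;
    register int *end = a + n;
    while (p != end)
        *p++ *= k;
}

/* Modern style: clear indexing; a current compiler generates
 * comparable code from this without any manual tweaking. */
void scale_new(int *a, int n, int k) {
    for (int i = 0; i < n; i++)
        a[i] *= k;
}
```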
Consider how much code is still being written in C: all of Linux, all GNU software, and much more. If we extend to its younger brother C++, we get the vast majority of computer systems worldwide.
Dennis, thanks for the great language you created.
Friday, September 16, 2011
The publisher has a translation of our book, as well as books on wine, smoking cessation, and the chemistry of explosives.
Wednesday, September 14, 2011
Friday, August 5, 2011
Interestingly, in their textbook Digitaltechnik—Eine praxisnahe Einführung (Digital Technology, a Practical Introduction), Armin Biere and Daniel Kroening present the design of a processor that executes a subset of the Y86 instructions using the actual IA32 encodings of the instructions.
Apparently, we weren’t the only ones to think of the name “Y86” as a variant of “x86.” Randall Hyde introduces a stripped-down 16-bit instruction set, which he names “Y86,” in his book Write Great Code, published in 2004, several years after the first edition of CS:APP came out.
The domain names “y86.com” and “y86.net” are already taken, but it looks like they’ve been occupied by a cybersquatter named Richard Strickland since 2003. Perhaps he’s just waiting for us to buy him out!
In 1982, I heard a presentation about the principles of Reduced Instruction Set Computers (RISC) from David Patterson. It was a real eye-opener! He pointed out that all that business about closing the semantic gap was implemented using microcode, which simply added an unnecessary layer of interpretation between the software and the hardware. Advanced compilers could do a much better job of taking advantage of special cases than could a general-purpose microcode interpreter. I returned from that talk and told the students in my computer architecture course that I felt like I’d be teaching the wrong material.
When I started teaching computer architecture at CMU, to both our PhD students and to CS undergraduates, I fully embraced the RISC philosophy, partly because of the then-new textbook, Computer Architecture: A Quantitative Approach, by Hennessy & Patterson. We made use of a set of MIPS-based machines available to the students, and later a set of Alphas, provided by Digital Equipment Corporation. Being able to compile and execute C programs on actual machines proved to be an important aspect of the courses. Needless to say, I was a true believer in RISC, and I would scoff at x86 as a big bag of bumps and warts with all of its addressing models, weirdly named registers, and truly icky floating-point architecture.
As mentioned in our earlier post, the initial offering of 15-213, our Introduction to Computer Systems course, made use of our collection of Alpha machines. But, we could see that we were on a dead-end path with these machines. In spite of their clean design and initial high performance, Alphas did not fare well in the marketplace. The steady progress by Intel, first with the Pentium and then with the PentiumPro, slowly took over the market for desktop machines, including for high-end engineering workstations. We were also thinking at that point about writing a textbook, to encourage others to teach about computer systems from a programmer’s perspective, and so we wanted to find a platform that would be widely available. We considered both Sun SPARC and IBM/Motorola PowerPC, but these had their own funky features (register windows, branch counters), and also lacked the universality of x86.
As an experiment, I tried compiling some of the C programs we had used to demonstrate the constructs found in machine-level programs on a Linux Pentium machine. Much to my surprise, I discovered that the assembly code generated by GCC wasn’t so bad after all. All of those different addressing models didn’t really matter---Linux uses “flat mode” addressing, which avoids all the weird segmentation stuff. The oddball instructions for decimal arithmetic and string comparison didn’t show up in normal programs. Floating-point code was pretty ugly, but we could simply avoid that. Moreover, it didn’t take a particularly magical crystal ball to see that x86 was going to be the dominant instruction set for the foreseeable future.
So, for the second offering of 15-213 in Fall 1999, Dave O’Hallaron and I decided to make a break from the RISC philosophy and go with x86 (or more properly IA32 for “Intel Architecture 32 bits”). This turned out to be one of the best decisions we ever made. Students could use any of the Linux-based workstations being deployed on campus. By installing the Cygwin tools, they could even do much of the work for class on their Windows laptops. The feeling of working with real code running on real machines was very compelling. When we then went to write Computer Systems: A Programmer’s Perspective, we were certain that x86 was the way to go. Now that Apple Macintosh has transitioned to Intel processors, there are really three viable platforms for presenting x86.
One thing we learned is that every machine has awkward features that students must learn if they are going to look at real machine programs. Here are some examples:
- In MIPS, you cannot load a 32-bit constant value with a single instruction. Instead, you load it in two 16-bit pieces, first using the lui instruction to set the upper 16 bits, and then an ori instruction to fill in the lower 16 bits. With a byte-oriented instruction set, such as x86, constants of any length can be encoded within a single instruction.
- C code compiled to MIPS uses the addu (unsigned add) instruction for adding signed numbers, since the add (signed add) instruction will trap on overflow.
- With the earlier Alphas, there was no instruction to load or store a single byte. Loading required a truly baroque pair of instructions: ldq_u (load quadword unaligned) and extbl (extract byte low), followed by two shifts to do a sign extension. This is all done in x86 with a movb (move byte) instruction, followed by a movsbl (move signed byte to long) to do the sign extension.
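The MIPS constant-loading sequence can be mirrored in C, as a sketch of the idea rather than actual MIPS output. (Assemblers pair lui with ori rather than an add, since the add immediate would be sign-extended.)

```c
#include <stdint.h>

/* Mirror of the MIPS idiom for building a 32-bit constant:
 * lui sets the upper 16 bits, ori fills in the lower 16 bits.
 * x86 instead embeds the full 32-bit immediate in one
 * variable-length instruction. */
uint32_t load_const32(uint16_t upper, uint16_t lower) {
    uint32_t r = (uint32_t)upper << 16;  /* lui  $r, upper      */
    r |= lower;                          /* ori  $r, $r, lower  */
    return r;
}
```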
My point here isn’t that x86 is superior to a RISC instruction set, but rather that all machines have their bumps and warts, and that’s part of what students need to learn.
In the end, I think the choice of teaching x86 vs. a cleaner language to a computer scientist or computer engineer is a bit like teaching English, rather than Spanish, to someone from China. Spanish is a much cleaner language, with predictable rules for how to pronounce words, far fewer irregularities, and a smaller vocabulary. It’s even useful for communicating with many other people, just as learning MIPS (or better yet, ARM) would be for programming embedded processors. But, as English is the main language for commerce and culture in this world, so x86 is the main language for machines that our students are likely to actually program. Like Chinese parents who send their children to English-language school, I’m content teaching my students x86!
Tuesday, August 2, 2011
What really made a difference in 15-213 was our ability to present interesting and engaging lab exercises, all done on computers. We had a set of Alpha 21164 processors (Digital Equipment Corporation---may they rest in peace---was always a great friend to CMU) that the students accessed over the network. Some of them were connected together via a separate Ethernet cable so that we could allow students to snoop packets in promiscuous mode.
Here are some of the labs we offered that fall:
- Data Lab. A set of “puzzles” that require implementing standard logical and arithmetic operations with a restricted set of C expressions. For example, compute the absolute value of a number without using any conditionals.
- Bomb Lab. This was the invention of our teaching assistant, Chris Colohan. It involved reverse engineering an executable program, given in binary form, and devising a set of strings that would “defuse” six different phases. This lab continues to be the centerpiece of the course. It gets students to learn about machine-level programming, the use of tools such as GDB, and the general strategies of reverse engineering.
- Malloc Lab. Students implement their own malloc packages. This lab has also stood the test of time. The challenge for most students is that all the casting and pointer hacking involved means that many bugs are not caught by the compiler, and tracking down bugs can be very difficult.
- Performance Lab. Students write programs, then analyze and optimize their cache performance. For this lab, we used matrix transpose as the problem to be solved.
- Network Lab. The students reverse-engineered a simple network protocol by sniffing packets. It was fun to finally figure out the packet format and suddenly have messages (from “Dr. Evil”) coming through.
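As a taste of the Data Lab, here is one well-known branch-free solution to the absolute-value puzzle mentioned above (one of several possible answers; the lab’s actual rules may restrict the allowed operators differently):

```c
/* Absolute value without conditionals. For negative x, the arithmetic
 * right shift makes mask = -1 (all ones), so (x ^ mask) - mask
 * computes ~x + 1 = -x. For non-negative x, mask = 0 and x is
 * returned unchanged. Assumes arithmetic shift of signed ints (true
 * on essentially all modern platforms); like abs(), it fails for
 * INT_MIN, which has no positive counterpart. */
int abs_nobranch(int x) {
    int mask = x >> 31;
    return (x ^ mask) - mask;
}
```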
Instructors for our upper-level systems courses have come to appreciate the preparation that 15-213 provides. Dave Eckhardt, one of our OS instructors, says that he can reliably predict how well a student will do in his course based on how they did with the Malloc Lab. 15-213 has become a prerequisite for courses in operating systems, networking, compilers, computer graphics (they want students to understand floating point), embedded systems, and computer architecture. The course is now required of all CS and ECE majors.
One sign of our success is the course ratings. Here are my average scores for “instructor effectiveness” on a five-point scale:
I have now taught the course eleven times, and I still really enjoy it, as do the students. Dave has also taught the course many times, sometimes with me and sometimes with other instructors. He received the CMU School of Computer Science’s Herbert A. Simon Award for Teaching Excellence in Computer Science in 2004, based largely on our students’ appreciation of his efforts in teaching 15-213.
Meanwhile, at one of our faculty lunches, Garth Gibson described the challenges he had in his operating systems course with the students’ lack of understanding of how programs are executed. He would say “To do a context switch, the OS needs to push the values of the registers onto the stack,” to which the students responded “Registers? Stack? What are those?” I told Garth that the students learned all that in my architecture course, but we realized that my course was not a prerequisite for operating systems, and it wouldn’t really work to make it so, in terms of student schedules.
So, Dave O’Hallaron (who had co-taught the computer architecture course with me) and I started thinking about a new course that would
- provide a programmer’s perspective, rather than a computer architect’s perspective, on computer systems, and
- come early enough in the curriculum that it could feed into our systems courses, including OS.
At CMU, I used to teach a junior-level computer architecture course that was required of all CS majors. I really liked showing how to construct a 5-stage pipeline to implement a processor that could execute MIPS code, as well as nuances of cache design, virtual memory, and data storage. Unfortunately, the students did not share my enthusiasm. They were much more oriented toward software, and, in their minds, this course formed a not-very-useful, dead-end branch in our curriculum.
We are creating this blog as a way to further build and support the CS:APP community. In this blog, we will post interesting stories, updates on the book contents and extra material, and our experiences in using this book in courses at CMU.
We welcome your suggestions, comments, and feedback!