CS 105

Lecture Topics and Learning Goals

This page summarizes the main topics and learning goals for the CS 105 lecture sequence. The course follows the broad arc of Computer Systems: A Programmer’s Perspective: starting from bits and data representation, moving through machine code and program execution, then outward to processes, concurrency, memory, I/O, storage, networking, and performance.

The goal is not merely to memorize isolated facts about C or x86-64 assembly. The larger aim is to build a working model of how programs actually run: how data is represented, how instructions manipulate memory and registers, how the operating system supports abstraction and sharing, and how hardware and software design choices affect correctness, security, and performance.

1. Bits, Integers, and Computer Architecture

You should be able to:

  • Describe the basic structure of a computer system and explain how the processor, memory, and operating system interact to run a program.
  • Explain why bits are the fundamental representation used throughout computer systems.
  • Convert among binary, hexadecimal, and decimal representations.
  • Predict the results of bitwise operations such as &, |, ^, ~, <<, and >>.
  • Distinguish bitwise operations from logical operations and explain when each is appropriate.

2. Integer Representations

You should be able to:

  • Explain the difference between unsigned and signed integer encodings.
  • List the sizes in bytes and bits of common integer types in C for the LP64 data model.
  • Describe two’s-complement representation and why it is used for signed integers, linking it to the properties of modular arithmetic or a counter that wraps around.
  • Compute the minimum and maximum values representable with a given word size.
  • Reason about casting between signed and unsigned integer types.
  • Predict how C expressions may behave unexpectedly because of implicit conversions.

3. Integer Operations, Extension, Truncation, and Overflow

You should be able to:

  • Continue reasoning about signed and unsigned integers in C.
  • Explain how integer values change under bit extension and truncation.
  • Predict the effects of casting among integral data types of different sizes.
  • Explain unsigned overflow as modular arithmetic.
  • Describe what signed integer overflow means in C and why it is dangerous to rely on.
  • Recognize conditions for undefined behavior in C related to signed integer overflow, including shifting into the sign bit.
  • Recognize conditions for undefined behavior in C for all integer types, including shifting by an amount greater than or equal to the width of the type or by a negative amount.

4. Floating-Point Representation

You should be able to:

  • Represent binary numbers using fractional notation and normalized scientific notation.
  • Describe the major components of IEEE-style floating-point encoding.
  • Explain the purpose of normalized values, denormalized values, infinities, and NaNs.
  • Describe, at a high level, how floating-point values are represented on x86-64 systems.
  • Recognize why floating-point arithmetic can differ from real-number arithmetic.

5. Compilation and Basic x86-64 Data Movement

You should be able to:

  • Describe the main stages of the compilation process and identify what each stage produces.
  • Explain the relationship between C code, assembly code, object files, and executable programs.
  • Interpret simple x86-64 data movement instructions such as movq.
  • Explain constraints on source and destination operands in x86-64 instructions.
  • Use common addressing modes to describe how assembly instructions access memory.

6. x86-64 Arithmetic, Logic, and Control Flow

You should be able to:

  • Interpret common arithmetic and logical x86-64 instructions such as leaq, addq, and andq.
  • Explain how condition codes are set and used.
  • Distinguish between explicit and implicit updates to processor state.
  • Trace conditional control flow in assembly code.
  • Predict how assembly instructions affect registers and memory.

7. Procedures and the Runtime Stack

You should be able to:

  • Describe the role of the runtime stack in x86-64 program execution.
  • Trace the effects of pushing to and popping from the stack.
  • Explain how procedure calls transfer control.
  • Describe how arguments and return values are passed in procedure calls.
  • Explain how local variables and temporary storage are allocated during procedure execution.
  • Reason about stack frames, call/return behavior, and changes to the program counter.

8. Arrays, Structs, and Data Layout

You should be able to:

  • Explain how one-dimensional and two-dimensional arrays are stored in memory.
  • Compute the address of an array element from its indices and element size.
  • Reason about the memory layout of C structs.
  • Explain the role of alignment and padding in struct layout.
  • Predict how changes to field order can affect the size and layout of a struct by applying the rules of alignment and padding.

9. Processes

You should be able to:

  • Describe the major abstractions provided by a process.
  • Explain what a process context is and why context switching is needed.
  • Reason about program behavior involving fork.
  • Reason about program behavior involving exec.
  • Construct process graphs for programs that create multiple processes.
  • Identify feasible and infeasible outputs from programs involving process creation.

10. Threads and Shared Memory

You should be able to:

  • Describe the memory model for threads within a process.
  • Explain how multiple logical flows can execute within a single process.
  • Identify which parts of memory are shared among threads and which are private.
  • Reason about correctness issues that arise from concurrent execution.
  • Recognize how nondeterminism can make threaded programs difficult to test and debug.

11. Synchronization and Race Conditions

You should be able to:

  • Explain critical sections and why unsafe interleavings can lead to incorrect behavior.
  • Identify simple race conditions in concurrent programs, particularly those arising from failure to follow good synchronization discipline.
  • Explain the goals of mutual exclusion and conditional waiting.
  • Describe how semaphores work and how they can be used to achieve mutual exclusion and conditional waiting.
  • Use semaphores to reason about synchronization.
  • Use mutexes and condition variables to develop correct concurrent programs.
  • List common synchronization patterns and explain how they work, including proper use of condition variables and avoiding pitfalls such as lost wakeups and spurious wakeups.
  • Apply synchronization invariants to analyze whether a concurrent program is correct.

12. Dynamic Memory Allocation

You should be able to:

  • Describe the design goals and tradeoffs of a dynamic memory allocator.
  • Describe a bump allocator and explain when it is appropriate to use.
  • Describe a free-list allocator and explain when it is appropriate to use.
  • Explain how coalescing and boundary tags can improve the performance of a free-list allocator.
  • Explain how allocated and free blocks can be represented in memory.
  • Reason about implicit free-list allocation.
  • Trace how calls to malloc and related operations affect the heap.
  • Explain the role of block headers, alignment, splitting, and coalescing.

13. Exceptional Control Flow and Signals

You should be able to:

  • Describe exceptional control flow and explain why it is central to systems programming.
  • Give examples of interrupts, traps, faults, and aborts.
  • Describe the parallels between signals for a process and processor exceptions/interrupts for the CPU.
  • Explain how exceptions alter the normal flow of control.
  • Describe how signals are used for communication between processes and the kernel.
  • Reason about the challenges of writing correct signal-handling code.

14. Networking

You should be able to:

  • List the layers of the five-layer Internet protocol stack and explain the role of each layer.
  • Explain how programs communicate over a computer network.
  • Describe the basic structure of a client-server application.
  • Explain the role of encapsulation in network protocols.
  • Identify the abstractions programmers use when writing networked applications.
  • Explain the basic ideas behind writing a simple server.

15. Midterm Review

You should be able to:

  • Integrate ideas from data representation, machine programming, processes, and concurrency.
  • Trace program behavior across multiple levels of abstraction.
  • Use systems concepts to explain both expected and surprising program behavior.
  • Practice applying course ideas to exam-style problems.

16. Security and Buffer Overflows

You should be able to:

  • Explain what a buffer overflow is and how it can occur in C programs.
  • Describe how the runtime stack can be exploited to alter program behavior.
  • Trace stack-frame layout in examples involving buffer overflow.
  • Explain how malicious code or unintended control flow can result from memory errors.
  • Discuss defenses against buffer overflow attacks and their limitations.

17. Unix I/O and Standard I/O

You should be able to:

  • Explain how Unix-based operating systems represent files.
  • Describe file descriptors and their role in Unix I/O, including the in-kernel per-process file table, the global open-file table, and the representation of files themselves (as vnodes).
  • List and describe basic Unix I/O system calls such as open, read, write, and close.
  • List and describe basic C Standard I/O library functions such as fopen, fread, fwrite, getchar, putchar, fprintf, fscanf, and fclose.
  • Contrast Unix I/O with the C Standard I/O library.
  • Explain the purpose and consequences of buffered I/O.
  • Reason about how open files and file descriptors are shared between parent and child processes.

18. Unix Filters, Pipes, and fgrep

You should be able to:

  • Explain the basic idea behind Unix filters and pipes.
  • Describe how programs can be composed using standard input and standard output.
  • Reason about the structure of a Unix filter such as fgrep.
  • Explain how buffered I/O affects the design and performance of file-processing programs.
  • Identify challenging implementation issues in a larger systems programming assignment.

19. Storage Devices

You should be able to:

  • Define and relate storage-related terms, such as:
    • Block, sector, track, cylinder, and seek for hard disk drives
    • Page, block, overprovisioning, TRIM, and wear leveling for solid-state drives
  • Explain how the physical properties of hard disk drives affect access time.
  • Describe the major components of disk access latency.
  • Compare hard disk drives with solid-state drives at a high level.
  • Explain how flash memory changes the design space for persistent storage.
  • Reason about how operating systems interact with storage devices.

20. Memory Hierarchy and Cache Basics

You should be able to:

  • Summarize the tradeoffs among different kinds of computer memory and storage.
  • Explain the principle of locality and why it matters for performance.
  • Distinguish temporal locality from spatial locality.
  • Describe the basic organization of a cache.
  • Decompose a memory address to determine where data may appear in a cache.

21. Cache Organization and Cache Simulation

You should be able to:

  • Explain how memory addresses map blocks to cache sets.
  • Compare direct-mapped and set-associative caches.
  • Simulate cache behavior for a sequence of memory accesses.
  • Determine whether each access is a hit or a miss.
  • Calculate miss rate for a memory-access pattern.
  • Reason about how cache organization affects performance.

22. Virtual Memory Concepts

You should be able to:

  • Explain why virtual memory is useful as an abstraction of main memory.
  • Describe the roles of virtual addresses, physical addresses, pages, and page tables.
  • Explain the role of the memory management unit in address translation.
  • Describe the drawbacks of a simple base-register approach to virtual memory and use that to motivate paging.
  • Distinguish page-table hits from page faults.
  • Determine the number of bits used for page offsets, virtual page numbers, and physical page numbers in simplified examples.
  • Decompose a virtual address into its page offset and virtual page number components.
  • Decompose a physical address into its page offset and physical page number components.

23. TLBs, Address Translation, and Caches

You should be able to:

  • Review and apply the core concepts of virtual memory.
  • Explain how translation lookaside buffers speed up address translation.
  • Trace address translation through a TLB and page table.
  • Combine virtual-memory translation with cache lookup.
  • Practice reasoning across multiple layers of the memory hierarchy.

24. Program Performance and Machine-Independent Optimization

You should be able to:

  • Explain why performance depends on more than asymptotic complexity.
  • Identify source-level transformations that can improve performance.
  • Describe examples of machine-independent optimization.
  • Explain why procedure calls and memory aliasing can limit compiler optimization.
  • Begin reasoning about how code structure interacts with hardware performance.

25. Pipelining, Instruction-Level Parallelism, and Loop Optimization

You should be able to:

  • Contrast sequential and pipelined execution.
  • Explain the basic idea of instruction-level parallelism.
  • Describe how pipelining and out-of-order execution can improve processor throughput.
  • Reason about how data dependencies limit performance.
  • Explain how loop unrolling can expose more opportunities for parallel execution.
  • Analyze when a code transformation is likely to improve performance and when it may not.
