CS 105

Exceptional Control Flow

The Big Idea

Your program isn't alone, and it isn't fully in charge. The hardware, the OS, and other processes can all alter its control flow for reasons both mundane and dramatic. Understanding how that happens starts with one mechanism, invented once and reused three times.

Act 1: Hardware — Inventing Interrupts

The Problem

A CPU running a game (or anything) needs input from devices, but devices are slow. We have three options, each better than the last:

Blocking
wait for the device. Everything else freezes. Turn-based Snake.
Polling
check a device register periodically. Better, but wasteful (almost always "nothing") and lossy — a quick left-then-up can lose the left entirely if it happens between checks.
Interrupts
the device taps the CPU on the shoulder via a dedicated wire (the interrupt request line). The CPU never wastes time asking, and no events are lost.

The Mechanism

When the interrupt line fires, the CPU:

  1. Finishes the current instruction
  2. Saves context (PC, registers)
  3. Reads the device's interrupt ID
  4. Looks up the handler in a vector table (just an array of function pointers!)
  5. Runs the handler
  6. Restores context and continues where it left off
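Step 4 is worth seeing concretely. A minimal sketch of a vector table in C, under the assumption that it really is just an array of function pointers indexed by interrupt ID (the names `install` and `dispatch` are hypothetical; real tables live at hardware-defined addresses):

```c
#include <stddef.h>

#define NUM_VECTORS 16

typedef void (*handler_t)(void);

/* The vector table: just an array of function pointers. */
static handler_t vector_table[NUM_VECTORS];

/* A demo handler: a "keyboard" device counts keypresses. */
int keypresses = 0;
void keyboard_handler(void) { keypresses++; }

/* "Programming the vector table" is just a pointer store. */
void install(int irq, handler_t h) {
    if (irq >= 0 && irq < NUM_VECTORS)
        vector_table[irq] = h;
}

/* Steps 4-5: look up the handler by interrupt ID and run it. */
void dispatch(int irq) {
    if (irq >= 0 && irq < NUM_VECTORS && vector_table[irq] != NULL)
        vector_table[irq]();
}
```

With `keyboard_handler` installed at ID 1, every `dispatch(1)` bumps the counter; a dispatch on an uninstalled ID is safely ignored.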

Same Trick, Three Uses

Once you have this mechanism, you can reuse it:

Interrupt
an external device needs attention (keyboard, disk, timer)

Fault / Abort
the CPU catches its own error during execution (divide by zero, bad memory access). Same save → lookup → handle → return flow.

Trap (Syscall)
a program triggers the mechanism on purpose. This is the only way out of user mode.

Protection and the Kernel

On a system running multiple programs, unrestricted I/O access is dangerous — any program could trash the disk or snoop on other programs' input. So the CPU gets two modes:

User mode
no direct I/O, restricted memory access
Kernel mode
full access to everything

The kernel is just the code that runs in kernel mode. A system call is a deliberate trap — the program says "I need something I'm not allowed to do myself" and the interrupt mechanism switches to kernel mode, runs the kernel's handler, and switches back.
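You can see the trap hiding under a familiar libc call. A Linux-specific sketch (the `syscall()` wrapper and `SYS_` numbers are not portable, and `raw_getpid` is a made-up name): `getpid()` is just a thin wrapper around the same trap you can issue yourself by number.

```c
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

/* Issue the getpid trap directly: switch to kernel mode, run the
 * kernel's handler, switch back, return the result. */
long raw_getpid(void) {
    return syscall(SYS_getpid);
}
```

Calling `raw_getpid()` and the libc `getpid()` returns the same value, because they end up at the same kernel handler.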

Act 2: Signals — Play-Town Interrupts

The Idea

Processes are pretend machines — they think they have their own CPU and memory. If they're pretending to be a machine, they should get pretend interrupts too. Those are signals.

The parallel is exact:

                       Hardware Interrupt            Signal
Event occurs           Device pulls interrupt line   Kernel sets pending signal
CPU/process notified   Between instructions          On return from kernel mode
Default action         Built-in ISR                  Terminate, stop, ignore, etc.
Custom handler         Program the vector table      Install with signal()
After handler          Restore context, continue     Restore context, continue

What Triggers a Signal?

  • Errors — your process did something the CPU caught (SIGSEGV, SIGFPE)
  • User actions — Ctrl-C (SIGINT), Ctrl-Z (SIGTSTP)
  • Other processes — kill() sends any signal you choose
  • Lifecycle — child process finished (SIGCHLD), terminal disconnected (SIGHUP)
  • Timers — SIGALRM, delivered after a requested delay
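The "other processes" trigger can be sketched in a few lines: fork a child, then `kill()` it from the parent with a signal of your choosing (here SIGTERM, whose default action is termination; `kill_child_demo` is a made-up helper name).

```c
#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

/* Parent sends SIGTERM to a child; returns 1 if the child was
 * terminated by that signal. */
int kill_child_demo(void) {
    pid_t pid = fork();
    if (pid < 0)
        return 0;             /* fork failed */
    if (pid == 0) {           /* child: wait around to be signaled */
        pause();
        _exit(0);
    }
    kill(pid, SIGTERM);       /* parent chooses the signal */
    int status;
    waitpid(pid, &status, 0);
    return WIFSIGNALED(status) && WTERMSIG(status) == SIGTERM;
}
```

Even if the signal arrives before the child reaches `pause()`, it stays pending and the default action still runs: no events are lost, just like hardware interrupts.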

When Are Signals Delivered?

Not truly between any two instructions, the way hardware interrupts are. The kernel delivers pending signals when a process transitions back from kernel mode (returning from a syscall or interrupt). The process thinks it was interrupted at an arbitrary point, but there's a gatekeeper.

Signal Handlers Can Return

A handler doesn't have to call exit(). It can set a flag and return, and the program picks up where it left off — just like a hardware interrupt handler.
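A minimal sketch of the set-a-flag-and-return pattern, using SIGUSR1 (the user-defined signal) so nothing dangerous happens; the names `got_signal` and `on_signal` are illustrative:

```c
#include <signal.h>

/* Shared between handler and main flow; volatile sig_atomic_t is
 * the type guaranteed safe to read/write in a handler. */
volatile sig_atomic_t got_signal = 0;

void on_signal(int sig) {
    (void)sig;
    got_signal = 1;   /* set a flag and return, like an ISR */
}
```

Install it with `signal(SIGUSR1, on_signal)`; after the signal is delivered, control returns to wherever the program was, and the main flow can notice the flag at its leisure.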

This makes signals a form of concurrency: the handler is a separate flow of control that shares state with the main program. Which means:

  • volatile is needed for shared variables — same reason as with threads. The compiler can't see across the handler boundary and may cache a variable in a register, never noticing the handler changed it.

  • EINTR — if the process was blocked in a syscall (like read()) when the signal arrived, the syscall can't transparently resume (that was the Multics dream — correct but nightmarishly complex to implement). Unix chose "worse is better": the syscall bails out, returns -1, sets errno to EINTR, and your code has to retry:

    while ((n = read(fd, buf, size)) == -1 && errno == EINTR) {
        ;  // interrupted, try again
    }
    

    This pushes complexity to the caller, but it's visible complexity — honest rather than hidden.
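Since every caller needs the same retry loop, it's common to wrap it once. A sketch under that idiom (`read_retry` is a hypothetical name; glibc spells a similar idea `TEMP_FAILURE_RETRY`):

```c
#include <errno.h>
#include <sys/types.h>
#include <unistd.h>

/* read(), but restarted automatically if a signal interrupts it. */
ssize_t read_retry(int fd, void *buf, size_t size) {
    ssize_t n;
    do {
        n = read(fd, buf, size);
    } while (n == -1 && errno == EINTR);
    return n;   /* real byte count, or -1 with a non-EINTR errno */
}
```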
