CS 134

Simple Process Placement in Memory

Single-Partition Memory Management

Let's begin by thinking about some of the simpler ways to place processes in memory. The simplest way to run programs is to only allow a single program to run at a time. We call that single-partition memory management, because we just have one space in memory where the program can live.

[Figure: memory divided into a User Space partition (768 KB) and the OS (256 KB)]
Single-Partition Memory Management

Even in this very simple world, there are some things we need to be careful about. Although we run only one program at a time, after that program finishes we'll likely want to run another one. Thus it's important to ensure that the running program doesn't accidentally overwrite the operating system's code or data structures.

  • Cat speaking

    Yes, you've mentioned that before, that's the whole user space and kernel space thing, right?

  • PinkRobot speaking

    Right.

  • Hedgehog speaking

    And we need to provide protection so that the user program can't mess with the operating system, right? But we never actually talked about how that would work…

A Simple Scheme for Memory Protection

Imagine it's the distant past, and you need to convince some CPU designers to add support to the CPU for memory protection. You just want to support the scenario we've described, where we have a single program running at a time. What would you ask for to provide memory protection?

  • Duck speaking

    I have an idea! We could have two modes, user mode and kernel mode. In user mode, we could only allow access to a certain range of memory. And we can only change the range when in kernel mode.

  • Horse speaking

    But how would we specify that range?

A simple design for memory protection adds a pair of registers that specify the range of memory that can be accessed: a base register, which holds the start of the range, and a limit register, which holds the size of the range. Only kernel-mode code can change the values in these registers.

Here's the design expressed as a diagram:

[Figure: the processor checks each address against the base and base + limit registers; an address outside that range never reaches memory and instead causes a TRAP]
Memory Protection Using Base and Limit Registers

  • Goat speaking

    Meh. But we can only run one program at a time. Weak.

  • PinkRobot speaking

    Actually, even in this very simple system, we could run multiple programs at once.

  • Horse speaking

    Whoa! How would that work?

Swapping

Running multiple programs at once when we only have one memory area to hold a program requires a technique called swapping. Swapping is the process of moving a program from memory to disk, and then moving another program from disk to memory.

[Figure: User Space (768 KB) above the OS (256 KB); process P1 is swapped out to disk while process P2 is swapped in]
Single-Partition Memory Management with Swapping

One issue with this approach is that while we're copying a program to or from disk, we can't do any useful work.

  • Rabbit speaking

    Fun fact. Even though swapping out whole programs is rarely done today, the term “swap” is still used to refer to moving parts of programs between memory and disk, and the area of disk used for this is still called the “swap space”.

  • PinkRobot speaking

    But we'll get to that later!

  • Goat speaking

    Meh. One program space still seems pretty weak to me.

Multiple-Partition Memory Management (Fixed Partitions)

Another way to run multiple programs at once is to divide the system's memory into multiple partitions, which we call multiple-partition memory management. For now, let's divide memory equally into three partitions.

[Figure: the OS (256 KB) plus three equal partitions: Process 1 (384 KB), Process 2 (384 KB), Process 3 (384 KB)]
Fixed-Partition Memory Management

  • Goat speaking

    Meh. Three whole programs.

  • PinkRobot speaking

    Combined with swapping, we can actually run more than three programs at once. And while we're copying a program in or out of memory, we can still run the other programs.

  • Cat speaking

    Do we have to copy a program back to the same place in memory that it used to be when we swap it back in?

  • PinkRobot speaking

    It depends…

Logical Addressing

As we've specified things so far, we're using what's called absolute addressing: a program knows where in physical memory it is. It can tell if it's in slot 0, 1, or 2. If we swap it out and then swap it back in, we could put it back in the same slot.

However, a very small change to our base-and-limit register design can allow an alternative, logical addressing, where a program thinks it lives at address zero no matter where it actually is in memory.

[Figure: the processor compares each logical address against the limit register, then adds the base register to produce the physical address sent to memory; a logical address at or beyond the limit causes a TRAP]
Logical Addressing

What changed from the initial design of base and limit registers to allow logical addressing?

The process of mapping a logical address to a physical address is called address translation.

  • Rabbit speaking

    Incidentally, address translation is another reason that it's so important for an OS kernel to use functions such as copyin and copyout. A logical address provided by a user program can easily be different from the physical address the kernel needs to use.

  • Horse speaking

    Whoa! So we can run more programs at once, and we can put them back in different places in memory? This is getting interesting!

Issues with Fixed Partitions

Dividing our memory into three fixed partitions was a simple approach, but it's wasteful. Small programs don't need all the space we've given them, and large programs can't run if they don't fit in a partition.

[Figure: fixed 384 KB partitions holding Process 1 (128 KB), Process 2 (256 KB), and Process 3 (256 KB), plus the OS (256 KB); the unused space inside each partition is shown faded]
Internal Fragmentation

In that diagram, the very faded green color shows wasted space inside a process—the memory was allocated to the process, but the process doesn't use it.

[Figure: the same three processes plus Process 4 (512 KB), which is too large for any of the fixed partitions]
Can't Run Large Processes

Here we can see that the large process, which wants two-thirds of the machine's memory, can never run, even if all the other processes are done, because there's no partition big enough to hold it.

  • Duck speaking

    But we know how to do this right! We saw it on the previous lesson page! We can just allocate dynamically sized chunks of memory. We'll have external fragmentation, but we'd be able to run these four programs at the same time some of the time, rather than never.

  • PinkRobot speaking

    Exactly. We'll get to that next.

Variable-Partition Memory Management

Variable-partition memory management is exactly the approach we looked at on the previous lesson page.

[Figure: the OS (256 KB) followed by partitions sized to fit each process: Process 1 (128 KB), Process 2 (256 KB), Process 3 (256 KB), Process 4 (512 KB)]
Variable-Partition Memory Management

  • Dog speaking

    Awesome. Problem solved! We could even compact memory to get rid of the external fragmentation, because with logical addressing the programs wouldn't know the difference.

  • Cat speaking

    And with swapping, we can swap out programs to disk to run more code than would fit in memory.

  • Goat speaking

    So we can all go home. Problem solved.

  • PinkRobot speaking

    Not quite. It turns out this approach is somewhat wasteful…
