Simple Process Placement in Memory
Single-Partition Memory Management
Let's begin by thinking about some of the simpler ways to place processes in memory. The simplest way to run programs is to only allow a single program to run at a time. We call that single-partition memory management, because we just have one space in memory where the program can live.
Even in this very simple world, there are things we need to be careful about. Although we run only one program at a time, after that program finishes we'll likely want to run another one. It's therefore important to ensure that the running program doesn't accidentally overwrite the operating system's code or data structures.
Yes, you've mentioned that before, that's the whole user space and kernel space thing, right?
Right.
And we need to provide protection so that the user program can't mess with the operating system, right? But we never actually talked about how that would work…
A Simple Scheme for Memory Protection
I have an idea! We could have two modes, user mode and kernel mode. In user mode, we could only allow access to a certain range of memory. And we can only change the range when in kernel mode.
But how would we specify that range?
A simple design for memory protection is to add a pair of registers that specify the range of memory that can be accessed: a base register that gives the start of the range, and a limit register that gives its size. Only kernel-mode code can change the values in these registers.
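As a concrete sketch, the check the hardware would perform on every user-mode memory access might look like this in C (the function name and the register values used below are illustrative, not from any real architecture):

```c
#include <stdbool.h>
#include <stdint.h>

/* The hardware's check on every user-mode access: the address must
 * fall within [base, base + limit). The base and limit register
 * values themselves can be changed only in kernel mode. */
bool access_ok(uint32_t base, uint32_t limit, uint32_t addr)
{
    return addr >= base && addr - base < limit;
}
```

With a base of 0x4000 and a limit of 0x1000, addresses 0x4000 through 0x4FFF are allowed, and anything outside that range would trigger a protection fault.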
Here's the design expressed as a diagram:
Meh. But we can only run one program at a time. Weak.
Actually, even in this very simple system, we could run multiple programs at once.
Whoa! How would that work?
Swapping
Running multiple programs at once when we only have one memory area to hold a program requires a technique called swapping. Swapping is the process of moving a program from memory to disk, and then moving another program from disk to memory.
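A minimal sketch of the idea in C, using an in-memory array to stand in for the disk (the names and sizes here are invented for illustration; a real kernel would perform disk I/O, not a memory copy):

```c
#include <string.h>

#define MEM_SIZE 64

/* The single program partition, and a simulated "swap space" on
 * disk with room for two swapped-out program images. */
static char memory[MEM_SIZE];
static char swap_space[2][MEM_SIZE];

/* Copy the current program's image out to the given swap slot. */
void swap_out(int slot) { memcpy(swap_space[slot], memory, MEM_SIZE); }

/* Copy a previously swapped-out image back into memory. */
void swap_in(int slot)  { memcpy(memory, swap_space[slot], MEM_SIZE); }
```

To switch from one program to another, the kernel would call swap_out for the running program and then swap_in for the next one; while those copies are in progress, the CPU has nothing useful to run.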
One issue with this approach is that while we're copying a program to or from disk, we can't do any useful work.
Fun fact: even though swapping out whole programs is rarely done today, the term “swap” is still used to refer to moving parts of programs between memory and disk, and the area of disk used for this is still called the “swap space”.
But we'll get to that later!
Meh. One program space still seems pretty weak to me.
Multiple-Partition Memory Management (Fixed Partitions)
Another way to run multiple programs at once is to divide the system's memory into multiple partitions, which we call multiple-partition memory management. For now, let's divide memory equally into three partitions.
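One way to sketch the bookkeeping in C: a small table of equally sized partitions and a routine that finds a free one (the sizes and names here are hypothetical):

```c
#include <stdint.h>

#define NPART 3
#define PART_SIZE 0x10000 /* memory divided equally, for illustration */

struct partition {
    uint32_t base;   /* where this partition starts in physical memory */
    int      in_use; /* is a program currently loaded here? */
};

static struct partition parts[NPART] = {
    { 0 * PART_SIZE, 0 },
    { 1 * PART_SIZE, 0 },
    { 2 * PART_SIZE, 0 },
};

/* Claim a free partition; returns its index, or -1 if all are full. */
int alloc_partition(void)
{
    for (int i = 0; i < NPART; i++) {
        if (!parts[i].in_use) {
            parts[i].in_use = 1;
            return i;
        }
    }
    return -1;
}
```

When alloc_partition returns -1, the kernel would either make the new program wait or swap an existing program out to free a slot.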
Meh. Three whole programs.
Combined with swapping, we can actually run more than three programs at once. And while we're copying a program in or out of memory, we can still run the other programs.
Do we have to copy a program back to the same place in memory that it used to be when we swap it back in?
It depends…
Logical Addressing
As we've specified things so far, we're using what's called absolute addressing: a program knows where in physical memory it is. It can tell if it's in slot 0, 1, or 2. If we swap it out and then swap it back in, we could put it back in the same slot.
However, a very small change to our base-and-limit register design allows an alternative, logical addressing, where a program thinks it lives at address zero no matter where it actually is in memory.
The process of mapping a logical address to a physical address is called address translation.
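In C, the translation the hardware would perform on each access might be sketched like this (returning -1 where real hardware would instead raise a protection fault; the register values used below are illustrative):

```c
#include <stdint.h>

/* Under logical addressing, a program's addresses start at zero;
 * on every access the hardware checks the address against the limit
 * register, then adds the base register to get the physical address. */
int64_t translate(uint32_t base, uint32_t limit, uint32_t logical)
{
    if (logical >= limit)
        return -1; /* out of range: protection fault */
    return (int64_t)base + (int64_t)logical;
}
```

Note that the program itself never sees the base value: if the kernel swaps it back in at a different location, only the base register changes, and every logical address still works.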
Incidentally, address translation is another reason that it's so important for an OS kernel to use functions such as copyin and copyout: a logical address provided by a user program can easily be different from the physical address the kernel needs to use.
Whoa! So we can run more programs at once, and we can put them back in different places in memory? This is getting interesting!
Issues with Fixed Partitions
Dividing our memory into three fixed partitions was a simple approach, but it's wasteful. Small programs don't need all the space we've given them, and large programs can't run if they don't fit in a partition.
In that diagram, the very faded green color shows wasted space inside a process—the memory was allocated to the process, but the process doesn't use it.
Here we can see that the large process, which wants two-thirds of the machine's memory, can never run, even if all the other processes are done, because there's no partition big enough to hold it.
But we know how to do this right! We saw it on the previous lesson page! We can just allocate dynamically sized chunks of memory. We'll have external fragmentation, but we'd be able to run these four programs at the same time some of the time, rather than never.
Exactly. We'll get to that next.
Variable-Partition Memory Management
Variable-partition memory management is exactly the approach we looked at on the previous lesson page.
Awesome. Problem solved! We could even compact memory to get rid of the external fragmentation, because with logical addressing the programs wouldn't know the difference.
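A sketch of why compaction is safe under logical addressing, in C (the simulated physical memory and the function name are invented for illustration): moving a program is just a copy plus a base-register update, and the program's logical addresses are unaffected.

```c
#include <string.h>
#include <stdint.h>

#define PHYS_SIZE 256
static char phys[PHYS_SIZE]; /* simulated physical memory */

/* Slide a program's image to a new physical location and update its
 * base register. Because the program uses logical addresses, it
 * cannot tell that it has moved. */
void compact(uint32_t *base, uint32_t limit, uint32_t new_base)
{
    memmove(&phys[new_base], &phys[*base], limit);
    *base = new_base;
}
```

Repeating this for each program, sliding every image down against its neighbor, merges all the external fragmentation into one free region at the top of memory.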
And with swapping, we can swap out programs to disk to run more code than would fit in memory.
So we can all go home. Problem solved.
Not quite. It turns out this approach is somewhat wasteful…