CS 134

Disk Scheduling

  • Goat speaking

    Meh. We've got this fancy disk with all its tracks and sectors, but it's sooo sloooow when we're trying to read a bunch of stuff.

  • Duck speaking

    Yeah—when I was playing with the simulator, I noticed that the order we read things made a huge difference in how long it took.

  • PinkRobot speaking

    That's exactly right! Let's look at how we can be smarter about the order we do our disk operations.

  • Hedgehog speaking

    I kind of have a sense of what the problem is, but maybe we could have a visualization to help us understand the issues better?

  • PinkRobot speaking

    You bet!

Disk Scheduling Visualized

Much as we used page traces when we looked at paging algorithms, here we'll use a disk-access trace to visualize how different disk scheduling algorithms work. This time, we won't use a trace drawn from a real working system (although our own Prof. Kuenning actually maintains such traces as part of his research work); instead we'll use a synthetic trace with the following properties:

  • Most accesses are in the front area of the disk, since that's the area with the highest data density and fastest transfer rates.
  • Some accesses are at the back of the disk.
  • Sometimes consecutive accesses are made to a sequence of adjacent logical blocks.
  • You can change the random seed to get a different trace.

If you scroll down and take a peek at the visualization, you'll also see that there's a drop-down menu to select the disk scheduling algorithm to use, but at the moment there is only one option: First-Come, First-Served (FCFS).

  • Horse speaking

    Hay! I get that the red dots are when we're accessing stuff, and the blue lines show where the head has been, but what are the gray dots and gray dotted lines?

  • PinkRobot speaking

    They show when requests arrive. The length of the dotted line shows how long the request has been waiting.

  • Cat speaking

    Oh, and if the lines get longer and longer as we go left to right, that shows that we're not really keeping up with the requests, right?

  • PinkRobot speaking

    That's right! In the graph below, we're not doing very well.

Interactive Disk Scheduling Simulator


Exploring FCFS

Looking at the graph in the simulator above, what do you notice about the order of disk accesses? If you notice a problem, do you have ideas on how to fix it?
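One way to make the graph's story concrete is to total up the head movement. Here's a minimal Python sketch (our own helper, not the simulator's code) that sums the seek distance when requests are serviced strictly in arrival order:

```python
def fcfs_seek_distance(start, requests):
    """Total tracks traveled when servicing requests in arrival order (FCFS)."""
    head, total = start, 0
    for track in requests:
        total += abs(track - head)  # seek from current position to the request
        head = track
    return total

# A zigzag workload: the head crosses most of the disk on nearly every request.
print(fcfs_seek_distance(50, [10, 90, 12, 88, 14]))  # -> 348
```

Five requests cost 348 tracks of travel here; as we'll see, smarter orderings can service the same requests with far less movement.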

  • Hedgehog speaking

    Ah, I see it now! That up-and-down zigzag pattern is the head moving back and forth across the disk, right? It reminds me of someone trying to multitask and getting nothing done. Just stay where you are and focus!

  • Cat speaking

    Perhaps we could turn that into an algorithm. “Focus” on the requests that are closest to you, and only move when you have to.

  • PinkRobot speaking

    And that leads us to our next scheduling algorithm: Shortest Seek Time First (SSTF).

Shortest Seek Time First (SSTF)

The SSTF algorithm works as follows:

  • Always choose the request that is closest to the current head position.
  • If there are multiple requests at the same distance, choose the one that arrived first.
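The two rules above can be sketched in a few lines of Python (the function name and structure are ours, not the simulator's; we use each request's position in the input list as its arrival time):

```python
def sstf_order(start, requests):
    """Return the SSTF service order: nearest request first,
    with ties broken in favor of the earlier arrival."""
    pending = list(enumerate(requests))  # (arrival_index, track)
    head, order = start, []
    while pending:
        # min() keys on (distance, arrival), so equal distances
        # fall back to first-come, first-served.
        arrival, track = min(pending, key=lambda p: (abs(p[1] - head), p[0]))
        pending.remove((arrival, track))
        order.append(track)
        head = track
    return order

print(sstf_order(50, [10, 90, 12, 88, 14]))  # -> [14, 12, 10, 88, 90]
```

Notice how the head drifts toward the cluster of nearby requests and only visits the far tracks once nothing closer remains.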

Click the button below to enable the SSTF scheduling algorithm in the drop-down menu.


Scroll back up and contrast SSTF with FCFS by toggling the drop-down menu between the two. Try different random seeds, and then try

  1. 32219
  2. 8519

The first seed really emphasizes the difference between the two algorithms, while the second seed leads to more subtle differences. What do you notice about the differences between the two algorithms?

  • Hedgehog speaking

    Oh, that's so much better! The head is moving much less, and it's getting things done faster. Like I said, it's better when you're focused and organized.

  • Horse speaking

    Hay! What about what seed 8519 shows? In that case, SSTF is a bit too focused. It's like it's so busy with the stuff right in front of it that it forgets about the stuff in the back. That second request in the upper half of the disk only got a chance to run when there was a lull in new requests. That's not good!

  • Pig speaking

    I think this algorithm is prone to STARVATION!!!

SSTF is, indeed, prone to starvation. If we have a lot of requests hitting one area of the disk, requests in other areas may never get serviced. That said, for starvation to be an issue, we have to have a continuous stream of requests coming in, overwhelming the disk. If there is ever a break in the requests, the disk head will eventually get to the requests that are far away.

  • Pig speaking

    It still seems pretty bad. The last thing you want when you're overwhelmed is to be STARVING, too!

  • Cat speaking

    Well, what if instead of being totally focused on what's closest, we picked a direction and handled everything in our path? Head up towards the high tracks, and then come back down. That way, we're not ignoring anything, but we're still being efficient.

  • Duck speaking

    Like an elevator! It goes up, stopping at every floor where someone pressed the button, then down doing the same thing.

  • PinkRobot speaking

    That's exactly right! This leads us to our next algorithm, commonly known as the Elevator algorithm or “SCAN”.

The Elevator Algorithm (“SCAN”)

The SCAN algorithm works like an elevator:

  • Pick a direction (up or down)
  • Service all requests in that direction until you reach the end of the disk
  • Reverse direction and repeat
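Here's a toy Python sketch of one upward SCAN pass (the names and the simplified one-pass setup are our own). The distance calculation makes SCAN's defining quirk explicit: the head is charged for traveling all the way to the last track before reversing:

```python
def scan_pass(start, max_track, requests):
    """One SCAN pass starting upward from `start`: service every request
    on the way to max_track, then reverse and service the rest."""
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    # Unlike LOOK, the head travels all the way to max_track before
    # reversing, even when no request lives out there.
    distance = (max_track - start) + ((max_track - down[-1]) if down else 0)
    return up + down, distance

print(scan_pass(50, 99, [10, 90, 12, 88, 14]))  # -> ([88, 90, 14, 12, 10], 138)
```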

Click the button below to enable the SCAN algorithm in the drop-down menu.


Try out the SCAN algorithm by enabling it in the drop-down menu. What do you notice about the head-movement pattern compared to SSTF? Some interesting seeds to try are

  1. 65674
  2. 8519

Is there an obvious fix for any issues you see?

  • Duck speaking

    SCAN might avoid starvation, but it seems so wasteful. It's like it's going up and down the disk for no reason. Real-world elevators don't do that!

  • Dog speaking

    Except when I press all the buttons on my way out.

  • Goat speaking

    Meh. Just skip the wasted work!

  • L-Chippy speaking

Fun fact: Prof. Melissa actually struggled to code this one, precisely because sending the head to the first or last track of the disk when no disk access is actually needed there broke all her simulation abstractions. In fact, it's still a little glitchy.

  • L-Floppy speaking

    And it turns out that actual modern disks don't even have a way to make the head go somewhere without a corresponding disk access. So abandoning this algorithm is a good idea!

The LOOK Algorithm

The LOOK algorithm is a variant of SCAN that doesn't move the heads all the way to the end of the disk. Instead, the heads reverse direction when there are no more requests in the current direction.
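A minimal Python sketch of one upward LOOK pass (names are ours); the only change from SCAN is where the head turns around, and the savings show up directly in the seek distance:

```python
def look_pass(start, requests):
    """One LOOK pass upward from `start`: like SCAN, but reverse as soon
    as there are no more requests in the current direction."""
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    turnaround = up[-1] if up else start  # farthest request, not the disk edge
    distance = (turnaround - start) + ((turnaround - down[-1]) if down else 0)
    return up + down, distance

print(look_pass(50, [10, 90, 12, 88, 14]))  # -> ([88, 90, 14, 12, 10], 120)
```

Same service order as SCAN on this workload, but the head stops at track 90 instead of plowing on to the end of the disk.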


Try out the LOOK algorithm by enabling it in the drop-down menu. What do you notice about the head-movement pattern compared to SCAN? Some interesting seeds to try are

  • 1711
  • 8519

Also, notice that these algorithms perform reads on the way down as well as on the way up. Do you see any challenges we might face if the drive uses LBA addressing?

  • Cat speaking

    Okay, what was the LBA issue? I didn't see anything wrong with the LOOK algorithm.

  • PinkRobot speaking

    When we're going back down, we want to go through tracks in decreasing order. If our disk uses LBA addressing, we can go through LBA addresses in decreasing order, but that means we'll go through both tracks and sectors in decreasing order. That's not what we want, because reading sectors in reverse order is the worst possible choice for seeking the next sector, requiring almost a full disk rotation to get there.
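To see this concretely, here's a toy sketch assuming an invented geometry of 64 sectors per track (real drives hide their true geometry behind the LBA interface):

```python
SECTORS_PER_TRACK = 64  # invented toy geometry, purely for illustration

def lba_to_track_sector(lba):
    """Map a logical block address to (track, sector) on the toy disk."""
    return lba // SECTORS_PER_TRACK, lba % SECTORS_PER_TRACK

# Walking three consecutive blocks in *decreasing* LBA order:
print([lba_to_track_sector(lba) for lba in (130, 129, 128)])
# -> [(2, 2), (2, 1), (2, 0)]: within the track, the sectors also come up
# in reverse order, so each read waits nearly a full rotation.
```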

  • Horse speaking

    Hay! Trying to figure that out, I noticed something else—requests for tracks in the middle of the disk seem to get serviced faster than requests at either end.

  • Dog speaking

    Yeah! The head passes through the middle tracks twice—once going up and once going down—before it gets back to either end.

  • Duck speaking

    So it's like flagging down a mail truck on a dead-end street. If you live in the middle of the street, you'll get two chances, once when it heads out and once when it comes back. But if you're at the far end of the street, you only get one chance.

  • PinkRobot speaking

    That's exactly right! This bias toward middle tracks is one reason we have variants called C-SCAN and C-LOOK…

Circular SCAN (C-SCAN) and Circular LOOK (C-LOOK)

These variants of SCAN and LOOK avoid trying to handle requests in the reverse direction. Instead, they always move in one direction, servicing requests as they go, and then jump back to the beginning of the disk.
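C-LOOK in particular is easy to sketch (a toy Python version of our own, not the simulator's code): requests below the head simply wait for the next upward sweep instead of being serviced on the way back down.

```python
def c_look_pass(start, requests):
    """C-LOOK: always sweep upward; after the highest pending request,
    jump straight back to the lowest one and sweep up again."""
    above = sorted(t for t in requests if t >= start)
    below = sorted(t for t in requests if t < start)
    return above + below  # `below` is serviced on the next upward sweep

print(c_look_pass(50, [10, 90, 12, 88, 14]))  # -> [88, 90, 10, 12, 14]
```

Because every track is always approached from the same direction, no track gets the "two passes per cycle" advantage that the middle tracks enjoy under SCAN and LOOK.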


Feel free to interact with the visualization to see how these algorithms work!

Real-World Disk Controllers

  • Goat speaking

    All these algorithms are great, but modern disks are pretty smart. Do operating systems really need to do all this?

  • Cat speaking

    Yeah, and what about caching? My SSD definitely has a cache.

  • PinkRobot speaking

    Good questions! The reality is a bit more complex than our simple algorithms suggest…

Modern disk controllers are sophisticated devices that maintain their own command queues and caches. However, they're also somewhat secretive about their internal workings—manufacturers consider their optimizations to be trade secrets!

Native Command Queuing (NCQ)

Modern SATA/SAS disks generally support the Native Command Queuing (NCQ) extension to the Serial ATA protocol. NCQ allows the operating system to queue multiple read/write requests with the controller, which is then free to reorder those requests based on various factors, including

  • Physical positioning of the head
  • Rotational position of the disk
  • Its own internal caching strategy
  • Mysterious secret sauce from the manufacturer

  • Goat speaking

    Meh. So the disk controller is doing its own scheduling? Then why did we learn all these algorithms?!!?

  • L-Floppy speaking

    It's like a partnership—the OS and disk controller both try to optimize things…

  • PinkRobot speaking

    That's right! The OS still needs to make intelligent decisions about what to send to the disk and when.

Tagged Command Queuing (TCQ)

Before Serial ATA was a thing, some SCSI and Parallel ATA (IDE) drives had a similar technology called Tagged Command Queuing (TCQ), which allowed the OS to submit multiple read/write requests to a queue maintained by the disk controller.

The tags specify how the controller should deal with each request; with SCSI, a tag indicates whether the request should be put at the head of the queue, pushing everything else aside, must be serviced in the order it was made, or can be serviced in any order.

Because the controller sends an interrupt after each task is completed, the bus type can make a huge difference in whether or not TCQ improves performance. In particular, the limitations of the old ISA bus required an interrupt to tell the CPU to queue each request and then another when the task was done. Later buses (PCI and onward) provided support for direct memory access (DMA), so they could handle most transfers without bothering the CPU.

Because the ATA bus was originally implemented using a cut-down form of the ISA bus, software compatibility requirements meant that Parallel ATA drives were forced to go through the CPU to do DMA, thus causing many more CPU interrupts and slowing the system down.

SATA was designed with support for disk-request queuing (NCQ) in mind from the start, allowing the host-bus adapter to handle much of the DMA activity and the drives to batch their completion interrupts to reduce overhead.

NCQ doesn't support the tagged queuing that TCQ did, so it's possible to have conflicts between the operating system's disk-scheduling system and the disk controller's queue manipulation.

SATA SSDs, which don't rely on mechanical devices to access different areas of storage, can get a major boost from NCQ when doing parallel I/O. NVMe SSDs have even greater queue depths, and thus larger speed gains.

Disk Caching and Write Buffering

Most modern disks have substantial RAM caches (often 32 MB to 256 MB). These caches serve several purposes:

  • Read-ahead caching: When reading sequential blocks, the disk prefetches additional blocks that might be needed
  • Write buffering: Writes can be acknowledged quickly and actually written to disk later
  • Command-reordering buffer: Gives the controller room to optimize the order of operations

  • Hedgehog speaking

    Wait, if the disk says a write is done but it's really just in cache, what happens if the power fails?

  • PinkRobot speaking

    That's a serious concern! Enterprise controller cards often have batteries or capacitors to allow cached writes to complete or to survive short-term power losses. Some enterprise SSDs even have their own caches and built-in capacitors.

  • Hedgehog speaking

    Now I'm thinking about how disks are these crazy mechanical devices that can fail. I hope my SSD is more reliable!

Self-Monitoring And Reporting Technology (SMART)

Modern disks also monitor themselves for signs of impending failure. SMART attributes might include

  • Number of reallocated sectors
  • Seek-error rates
  • Spin-up time
  • Temperature

  • Horse speaking

    Hay! What's a reallocated sector?

  • PinkRobot speaking

    When a disk detects a sector that's gone bad, it can move the data to a spare sector. An increasing reallocated-sector count is a sign that the disk is starting to fail.

One of the advantages of LBA addressing is that the drive itself controls the mapping of logical sectors to physical ones, so if there's a sector that's problematic, the drive can remap it to a spare sector without the OS even knowing. (Via SMART, the drive will say how many sectors have been remapped, but not which sectors.)

Given what we've learned about modern disk controllers,

  1. Why might an OS want to send requests to the disk in batches rather than one at a time?
  2. What factors should the OS consider when deciding how many requests to queue up?
  3. How might disk caching affect our scheduling decisions?

  • Cat speaking

    This got complicated fast! I guess that's why it's good we learned the basic algorithms first—they help us understand what the disk controller might be doing.

  • Goat speaking

    Yeah, but I still think SSDs are better.

  • PinkRobot speaking

    We'll talk about SSDs next time; they have their own interesting challenges!
