CS 134

Key Points: Scheduling Algorithms

Process Arrival Times and I/O Wait

  1. Arrival Time: Processes may enter the system at different times, affecting scheduling decisions
  2. I/O Wait: Processes may become blocked waiting for I/O
  3. Compute-bound vs I/O-bound Processes:
    • Compute-bound: Spend most time doing computation
    • I/O-bound: Spend significant time waiting for I/O
      • Delaying an I/O-bound process after its I/O completes leaves the device idle, reducing I/O throughput
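The throughput point can be made concrete with back-of-the-envelope numbers (all illustrative assumptions): suppose an I/O-bound process needs 1 ms of CPU to launch each 10 ms I/O request. If it runs as soon as its I/O completes, the device stays nearly saturated; if it must first wait out a compute-bound process's long time slice, device utilization collapses.

```python
def io_utilization(cpu_delay_ms, io_ms=10, launch_ms=1):
    """Fraction of time the I/O device is busy over one request cycle.

    A cycle is: wait cpu_delay_ms for the CPU, run launch_ms to issue the
    request, then the device works for io_ms. (Numbers are illustrative.)
    """
    cycle = cpu_delay_ms + launch_ms + io_ms
    return io_ms / cycle

# No scheduling delay: device is busy ~91% of the time.
print(io_utilization(0))     # 10 / 11
# Waiting out a 100 ms compute-bound slice: device busy ~9% of the time.
print(io_utilization(100))   # 10 / 111
```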

Multi-Level Feedback Queue (MLFQ)

  1. Definition: Uses multiple queues with different priority levels and time slices
  2. Characteristics:
    • Attempts to optimize for both short and long jobs
    • Adapts to job behavior without prior knowledge
    • Can suffer from starvation without periodic boosting
    • Complexity can make tuning difficult

MLFQ Rules

  1. Higher priority queues run first
  2. Within a queue, round-robin scheduling is used
  3. New jobs start at highest priority
  4. Jobs move to lower priority after using their time allotment
  5. Periodically boost all jobs to highest priority to prevent starvation
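The five rules above can be sketched in a few lines of Python. The queue count, per-level quanta, and boost period below are illustrative assumptions, and the sketch charges a job's whole run in one step rather than tick by tick:

```python
from collections import deque

NUM_LEVELS = 3
QUANTUM = [2, 4, 8]     # time slice per priority level, highest first (assumed)
BOOST_EVERY = 20        # boost period (assumed)

class Job:
    def __init__(self, name, work):
        self.name = name
        self.work = work          # remaining CPU time needed

def mlfq(jobs):
    queues = [deque() for _ in range(NUM_LEVELS)]
    for j in jobs:
        queues[0].append(j)       # Rule 3: new jobs start at highest priority
    t, next_boost, order = 0, BOOST_EVERY, []
    while any(queues):
        if t >= next_boost:       # Rule 5: periodic boost prevents starvation
            for lvl in range(1, NUM_LEVELS):
                while queues[lvl]:
                    queues[0].append(queues[lvl].popleft())
            next_boost += BOOST_EVERY
        lvl = next(i for i, q in enumerate(queues) if q)  # Rule 1: higher first
        job = queues[lvl].popleft()                       # Rule 2: round-robin
        run = min(QUANTUM[lvl], job.work)
        t += run
        job.work -= run
        order.append(job.name)
        if job.work > 0:          # Rule 4: demote after using the allotment
            queues[min(lvl + 1, NUM_LEVELS - 1)].append(job)
    return order
```

Running `mlfq([Job('A', 3), Job('B', 10)])` shows the adaptive behavior: the short job A finishes early at high priority, while the long job B sinks to lower levels with longer quanta.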

Multicore Scheduling

  1. Shared-Queue Design:
    • Multiple cores access the same scheduler queues
    • Requires careful synchronization (typically spinlocks)
    • Can lead to contention issues
  2. Per-Core Queue Design:
    • Each core has its own set of queues
    • Reduces contention but introduces load balancing challenges
    • Requires consideration of cache affinity when migrating processes
  3. Process Migration:
    • Pull Migration (Work-Stealing): Idle cores seek work from busy cores
    • Push Migration (Work-Gifting): Busy cores offload work to idle cores
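Pull migration can be sketched as follows; this single-threaded sketch models each core's queue as a deque and has idle cores steal half the work of the busiest core (the function name and the steal-half policy are illustrative assumptions, and real implementations must synchronize access and weigh cache affinity):

```python
from collections import deque

def balance(queues):
    """One work-stealing pass: each core with an empty queue steals half the
    tasks of the fullest queue, taking from the tail to leave the victim's
    oldest (likely cache-warm) tasks in place."""
    for idle in queues:
        if not idle:
            victim = max(queues, key=len)   # pick the busiest core's queue
            for _ in range(len(victim) // 2):
                idle.append(victim.pop())   # steal from the tail
    return queues

# Core 0 has four tasks, core 1 is idle; after one pass the load is even.
cores = balance([deque([1, 2, 3, 4]), deque()])
print([len(q) for q in cores])
```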

Remember

  • Real-world scheduling involves processes arriving at different times and becoming blocked for I/O
  • MLFQ attempts to balance the needs of short, interactive jobs and long-running batch processes
  • MLFQ uses past behavior to adapt job priorities and time slices, assuming that the past is a good predictor of the future
  • Tuning scheduler parameters is crucial but can be complex (e.g., in MLFQ)
  • Multicore scheduling introduces new challenges, particularly around load balancing and cache affinity
  • There are many trade-offs in designing and implementing scheduling algorithms; we can only assess performance with representative workloads and adequate measurement
  • Be able to explain the behavior of different scheduling algorithms and their impact on system performance
