Key Points: Scheduling Algorithms
Process Arrival Times and I/O Wait
- Arrival Time: Processes may enter the system at different times, affecting scheduling decisions
- I/O Wait: Processes may become blocked waiting for I/O
- Compute-bound vs I/O-bound Processes:
  - Compute-bound: spend most of their time doing computation
  - I/O-bound: spend significant time waiting for I/O
- Keeping I/O-bound processes waiting to run after I/O completes may reduce I/O throughput
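As a rough illustration of the last point, consider an I/O-bound process that alternates a short CPU burst with a long I/O request. The burst, I/O, and delay figures below are made up, but the arithmetic shows why any scheduling delay after I/O completion leaves the device idle and cuts its utilization:

```python
# Toy model: one process cycles CPU burst -> I/O request -> (scheduling delay).
# The I/O device is busy only during the I/O portion of each cycle, so any
# delay before the next CPU burst (and hence the next I/O request) is device
# idle time. All numbers are illustrative.

def io_utilization(cpu_burst_ms, io_ms, delay_ms):
    cycle = cpu_burst_ms + io_ms + delay_ms  # length of one full cycle
    return io_ms / cycle                     # fraction of time the device is busy

print(io_utilization(1, 10, 0))   # dispatched promptly: ~0.909
print(io_utilization(1, 10, 20))  # 20 ms scheduling delay: ~0.323
```

Keeping the I/O-bound process near the front of the run queue keeps the delay term small and the device busy.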
Multi-Level Feedback Queue (MLFQ)
- Definition: Uses multiple queues with different priority levels and time slices
- Characteristics:
  - Attempts to optimize for both short and long jobs
  - Adapts to job behavior without prior knowledge
  - Can suffer from starvation without periodic boosting
  - Complexity can make tuning difficult
MLFQ Rules
- Higher priority queues run first
- Within a queue, round-robin scheduling is used
- New jobs start at highest priority
- Jobs move to lower priority after using their time allotment
- Periodically boost all jobs to highest priority to prevent starvation
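The rules above can be sketched as a small simulation. This is an illustrative toy, not a real scheduler: the queue count, allotments, and boost period are arbitrary choices, and it ignores arrival times and I/O blocking:

```python
from collections import deque

NUM_LEVELS = 3          # illustrative: three priority queues
ALLOTMENT = [2, 4, 8]   # time slices grow at lower priorities
BOOST_PERIOD = 50       # ticks between priority boosts

class Job:
    def __init__(self, name, work):
        self.name = name
        self.work = work    # remaining CPU ticks
        self.level = 0      # rule: new jobs start at the highest priority

def mlfq(jobs):
    queues = [deque() for _ in range(NUM_LEVELS)]
    for j in jobs:
        queues[0].append(j)
    time = last_boost = 0
    order = []              # (job name, ticks run) in dispatch order
    while any(queues):
        if time - last_boost >= BOOST_PERIOD:
            # Rule: periodically boost all jobs to prevent starvation.
            last_boost = time
            for q in queues[1:]:
                while q:
                    j = q.popleft()
                    j.level = 0
                    queues[0].append(j)
        # Rule: higher-priority queues run first; FIFO popleft within a
        # queue gives round-robin among its jobs.
        level = next(i for i, q in enumerate(queues) if q)
        job = queues[level].popleft()
        run = min(ALLOTMENT[level], job.work)
        time += run
        job.work -= run
        order.append((job.name, run))
        if job.work > 0:
            # Rule: demote after using the allotment at this level.
            job.level = min(level + 1, NUM_LEVELS - 1)
            queues[job.level].append(job)
    return order

print(mlfq([Job("A", 6), Job("B", 3)]))
```

With the sample jobs, A and B alternate short slices at the top level, then finish with longer slices after demotion, showing how short jobs get quick service while long jobs still make progress.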
Multicore Scheduling
- Shared-Queue Design:
  - Multiple cores access the same scheduler queues
  - Requires careful synchronization (typically spinlocks)
  - Can lead to contention issues
- Per-Core Queue Design:
  - Each core has its own set of queues
  - Reduces contention but introduces load-balancing challenges
  - Requires consideration of cache affinity when migrating processes
- Process Migration:
  - Pull Migration (Work-Stealing): idle cores seek work from busy cores
  - Push Migration (Work-Gifting): busy cores offload work to idle cores
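A minimal sketch of the per-core design with pull migration (work stealing), using ordinary mutexes in place of real spinlocks; the class and task names are illustrative, not from any real kernel:

```python
import threading
from collections import deque

# Each "core" owns a run queue guarded by its own lock, so cores normally
# contend only on their local queue, not one shared scheduler lock.
class Core:
    def __init__(self, ident):
        self.ident = ident
        self.queue = deque()
        self.lock = threading.Lock()  # stand-in for a spinlock

    def push(self, task):
        with self.lock:
            self.queue.append(task)

    def pop_local(self):
        # The owning core takes the most recently pushed (cache-warm) task.
        with self.lock:
            return self.queue.pop() if self.queue else None

    def steal_from(self, victim):
        # Pull migration: an idle core locks a busy core's queue and takes
        # the *oldest* task, leaving cache-warm work on the victim core.
        with victim.lock:
            return victim.queue.popleft() if victim.queue else None

cores = [Core(i) for i in range(2)]
for n in range(4):
    cores[0].push(f"task-{n}")    # all work lands on core 0

stolen = cores[1].steal_from(cores[0])  # idle core 1 pulls work
print(stolen)                           # task-0
```

Stealing from the opposite end of the victim's deque is a common design point: it balances load while respecting cache affinity, since the victim keeps the tasks it touched most recently.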
Remember
- Real-world scheduling involves processes arriving at different times and becoming blocked for I/O
- MLFQ attempts to balance the needs of short, interactive jobs and long-running batch processes
- MLFQ uses past behavior to adapt job priorities and time slices, assuming that the past is a good predictor of the future
- Tuning scheduler parameters is crucial but can be complex (e.g., in MLFQ)
- Multicore scheduling introduces new challenges, particularly around load balancing and cache affinity
- There are many trade-offs in designing and implementing scheduling algorithms; we can only assess performance with representative workloads and adequate measurement
- Be able to explain the behavior of different scheduling algorithms and their impact on system performance