CS 134

Key Points: Page Replacement Revisited

Approximating LRU

  1. Challenge with True LRU:
    • Requires updating page order on every page access
    • Too expensive in terms of CPU overhead
    • Need practical approximations
  2. Clock Algorithm:
    • Uses single reference bit per page
    • Circular scan of frames looking for unreferenced pages
    • Pages get “second chance” if referenced
    • Auto-advancing clock hand can improve performance
  3. Implementation Considerations:
    • Reference bit cleared when clock hand passes
    • Pages safe if referenced between passes
    • Balance between clearing frequency and accuracy
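The clock scan described above can be sketched in a few lines of Python. This is a minimal illustration, not a real kernel implementation; the class and method names are invented for the example, and the reference bit (set by hardware in a real MMU) is set in software here.

```python
# Illustrative sketch of the clock (second-chance) algorithm.
class ClockReplacer:
    def __init__(self, nframes):
        self.pages = [None] * nframes   # page resident in each frame
        self.ref = [False] * nframes    # per-frame reference bit
        self.hand = 0                   # clock hand position

    def access(self, page):
        """Touch a page, faulting it in if needed. Returns True on a hit."""
        if page in self.pages:
            self.ref[self.pages.index(page)] = True  # hardware would set this
            return True
        # Page fault: sweep until a frame with a cleared reference bit is found.
        while self.ref[self.hand]:
            self.ref[self.hand] = False              # clear: give a second chance
            self.hand = (self.hand + 1) % len(self.pages)
        self.pages[self.hand] = page                 # evict and reuse this frame
        self.ref[self.hand] = True
        self.hand = (self.hand + 1) % len(self.pages)
        return False
```

A page referenced between two passes of the hand keeps its bit set and survives the next sweep; a page left untouched loses its second chance and is evicted.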

Managing Dirty Pages

  1. Write Handling:
    • Dirty bit tracks modified pages
    • Must write modified pages to disk before reuse
    • Writes add significant overhead to page replacement
  2. Swap Space:
    • Dedicated disk space for paging
    • Required for storing modified pages
    • Name dates to early Unix systems, which swapped entire processes to disk

  3. Performance Impact:
    • Disk writes take significant time
    • Need to track both referenced and dirty state
    • Can stall if replacement frame not ready

Advanced Page Replacement

  1. Two-Handed Clock:
    • Separate hands for clearing bits and page replacement
    • Maintains specific number of unreferenced pages
    • Can reduce disk write frequency
  2. Mach Queue-Based Approach:
    • Three queues: active, inactive, and free
    • Pages progress through queues based on use
    • System maintains target ratios between queues
  3. Strategy Comparisons:
    • Clock and queue-based approaches both approximate LRU well
    • Queue-based approach good at detecting program phases
    • Different strategies balance page faults vs. disk writes
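One step of the two-handed variant can be sketched as follows: the leading hand clears reference bits, and the trailing hand, a fixed distance behind, evicts any page whose bit is still clear when it arrives. The function name and fixed-gap formulation are illustrative assumptions, not taken from a specific kernel.

```python
# Illustrative single step of a two-handed clock sweep.
def two_handed_sweep(ref_bits, lead, gap):
    """Advance one step: lead hand clears a bit, trail hand checks one.

    Returns the index of an evicted frame, or None if the trailing
    hand's page was referenced since the leading hand passed it.
    """
    n = len(ref_bits)
    trail = (lead - gap) % n      # trailing hand lags by a fixed gap
    ref_bits[lead] = False        # leading hand clears the reference bit
    if not ref_bits[trail]:       # untouched since the lead passed: evict
        return trail
    return None
```

Shrinking the gap between the hands gives pages less time to be re-referenced, making eviction more aggressive; widening it makes the sweep behave more like the one-handed clock.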

Other Memory Management Features

  1. Memory-Mapped Files:
    • Map file contents directly into memory
    • Pages loaded on demand when accessed
    • Changes written back to file automatically
    • Efficient way to access file content
  2. Shared Memory:
    • Read-only code pages shared between processes
    • Writable pages can be shared via memory mapping
    • Reduces memory overhead for common libraries
  3. System Adaptations:
    • Purgeable memory for low-memory situations
    • Process termination as last resort (OOM killer)
    • Speculative page loading for sequential access
    • Copy-on-write for efficient process forking
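Memory-mapped files are easy to demonstrate with Python's `mmap` module: the file's bytes appear directly in the process's address space, reads happen through ordinary indexing, and modifications are written back to the file.

```python
import mmap, os, tempfile

# Map a small file into memory and modify it through the mapping.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello world")
with mmap.mmap(fd, 0) as m:       # length 0 maps the whole file
    first = m[:5]                 # read file content through memory
    m[:5] = b"HELLO"              # modify the file in place
    m.flush()                     # push dirty pages back to the file
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 11)            # the file now reflects the change
os.close(fd)
os.unlink(path)
```

No explicit `read()` or `write()` of the modified bytes is needed; the OS pages the file in on demand and writes dirty pages back, which is exactly the mechanism the bullets above describe.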

Remember

  • Pure LRU is impractical; approximations work well in practice
  • Dirty pages require special handling and increase overhead
  • Multiple viable strategies exist (Clock, Two-Handed, Queue-based)
  • Modern systems use sophisticated hybrid approaches
  • Memory mapping and sharing reduce overall memory usage
  • Systems must balance memory usage against performance
  • Copy-on-write and purgeable memory provide additional flexibility
