Key Points: Page Replacement Revisited
Approximating LRU
- Challenge with True LRU:
  - Requires updating page order on every page access
  - Too expensive in terms of CPU overhead
  - Need practical approximations
- Clock Algorithm:
  - Uses single reference bit per page
  - Circular scan of frames looking for unreferenced pages
  - Pages get “second chance” if referenced
  - Auto-advancing clock hand can improve performance
- Implementation Considerations:
  - Reference bit cleared when clock hand passes
  - Pages safe if referenced again between passes
  - Balance between clearing frequency and accuracy
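The clock scan above can be sketched in a few lines. This is a minimal single-hand version, assuming a fixed set of frames and a software-managed reference bit (on real hardware, the MMU sets the bit in the page table on each access); the class and method names are illustrative.

```python
class ClockReplacer:
    def __init__(self, num_frames):
        self.frames = [None] * num_frames       # page held by each frame
        self.referenced = [False] * num_frames  # per-frame reference bit
        self.hand = 0                           # clock hand position

    def access(self, page):
        """Record an access; returns True on a page fault."""
        if page in self.frames:
            self.referenced[self.frames.index(page)] = True
            return False
        self._replace(page)
        return True

    def _replace(self, page):
        # Sweep: a referenced page gets a second chance (bit cleared,
        # hand advances); the first unreferenced page is the victim.
        while self.referenced[self.hand]:
            self.referenced[self.hand] = False
            self.hand = (self.hand + 1) % len(self.frames)
        self.frames[self.hand] = page
        self.referenced[self.hand] = True
        self.hand = (self.hand + 1) % len(self.frames)
```

Note that a recently used page survives the sweep only because its bit was set again after the last pass, which is exactly the approximation to LRU the notes describe.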
Managing Dirty Pages
- Write Handling:
  - Dirty bit tracks modified pages
  - Must write modified pages to disk before reuse
  - Writes add significant overhead to page replacement
- Swap Space:
  - Dedicated disk space for paging
  - Required for storing modified pages
  - Historical name from early Unix systems
- Performance Impact:
  - Disk writes take significant time
  - Need to track both referenced and dirty state
  - Can stall if replacement frame not ready
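A minimal sketch of why dirty pages slow replacement down: a clean frame can be reused immediately, while a dirty one must first be written to swap. Here `write_to_swap` is a stand-in for the real disk I/O path, and the frame dictionary is an illustrative data layout, not a real kernel structure.

```python
def evict(frame, write_to_swap):
    """Free a frame for reuse; dirty frames must be written back first."""
    if frame["dirty"]:
        # Slow path: the replacement stalls on disk I/O here.
        write_to_swap(frame["page"], frame["data"])
        frame["dirty"] = False
    # Fast path for clean frames: no I/O, frame is immediately reusable.
    frame["page"] = None
```

This is why systems prefer clean victims when they can, and why a background writer that cleans pages ahead of demand reduces stalls.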
Advanced Page Replacement
- Two-Handed Clock:
  - Separate hands for clearing bits and page replacement
  - Maintains specific number of unreferenced pages
  - Can reduce disk write frequency
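One way to picture the two hands, as a sketch: the front hand clears reference bits, and the back hand, trailing by a fixed gap, evicts pages that stayed unreferenced for the whole interval. The fixed gap size and the lockstep sweep are simplifying assumptions; real systems tune both.

```python
class TwoHandedClock:
    def __init__(self, num_frames, gap):
        self.referenced = [False] * num_frames
        self.front = gap % num_frames  # leading hand: clears bits
        self.back = 0                  # trailing hand: picks victims
        self.n = num_frames

    def find_victim(self):
        # Advance both hands in lockstep until the back hand reaches a
        # page that was not re-referenced during the front-to-back gap.
        while True:
            victim = None
            if not self.referenced[self.back]:
                victim = self.back
            self.referenced[self.front] = False
            self.front = (self.front + 1) % self.n
            self.back = (self.back + 1) % self.n
            if victim is not None:
                return victim
```

The gap controls how long a page has to prove itself: a wider gap gives pages more time to be re-referenced before the back hand arrives.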
- Mach Queue-Based Approach:
  - Three queues: active, inactive, and free
  - Pages progress through queues based on use
  - System maintains target ratios between queues
- Strategy Comparisons:
  - Clock and queue-based approaches both approximate LRU well
  - Queue-based approach good at detecting program phases
  - Different strategies balance page faults vs. disk writes
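The Mach-style queues can be sketched with simple FIFO lists. The promotion-on-reuse and background-demotion rules below are simplified assumptions, not Mach's actual policy: there is no dirty-page handling, and the free queue is implicit in unused capacity.

```python
from collections import deque

class QueuePager:
    def __init__(self, capacity, inactive_target):
        self.capacity = capacity
        self.active = deque()    # pages in recent use (FIFO order)
        self.inactive = deque()  # aging pages, candidates for reclaim
        self.inactive_target = inactive_target
        self.evictions = []      # reclaimed pages, kept for inspection

    def access(self, page):
        if page in self.active:
            return False
        if page in self.inactive:
            self.inactive.remove(page)
            self.active.append(page)   # reused before reclaim: promote
            return False
        # Page fault: reclaim the oldest inactive page if memory is full.
        if len(self.active) + len(self.inactive) >= self.capacity:
            if not self.inactive:
                self.inactive.append(self.active.popleft())
            self.evictions.append(self.inactive.popleft())
        self.active.append(page)
        # Background demotion: keep the inactive queue near its target
        # size so reclaim candidates are always staged and ready.
        while len(self.inactive) < self.inactive_target and len(self.active) > 1:
            self.inactive.append(self.active.popleft())
        return True
```

Keeping the inactive queue near a target size is what lets replacement avoid stalling: there is almost always a pre-aged page ready to reclaim.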
Other Memory Management Features
- Memory-Mapped Files:
  - Map file contents directly into memory
  - Pages loaded on demand when accessed
  - Changes written back to file automatically
  - Efficient way to access file content
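As a concrete illustration, Python's `mmap` module exposes memory-mapped file I/O directly; the temporary file and its contents here are just for demonstration.

```python
import mmap
import os
import tempfile

# Create a small file to map (contents are illustrative).
fd, path = tempfile.mkstemp()
os.write(fd, b"hello, paging")
os.close(fd)

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as m:   # map the whole file
        first = bytes(m[:5])              # read: pages faulted in on demand
        m[:5] = b"HELLO"                  # write: marks the page dirty
        m.flush()                         # push modified pages back to the file

with open(path, "rb") as f:
    data = f.read()                       # reflects the in-memory change
os.remove(path)
```

Note there is no explicit `read()` or `write()` of file contents: loading and write-back happen through the paging machinery, which is the efficiency the notes point to.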
- Shared Memory:
  - Read-only code pages shared between processes
  - Writable pages can be shared via memory mapping
  - Reduces memory overhead for common libraries
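A sketch of writable page sharing using Python's `multiprocessing.shared_memory`. For simplicity, two handles in one process stand in for two cooperating processes; both map the same physical region, so a write through one is visible through the other.

```python
from multiprocessing import shared_memory

# Create a shared region and attach a second handle to it by name
# (in a real system the name would be passed to another process).
shm = shared_memory.SharedMemory(create=True, size=16)
peer = shared_memory.SharedMemory(name=shm.name)

shm.buf[:5] = b"hello"        # write through the first mapping
seen = bytes(peer.buf[:5])    # read through the second: same pages

peer.close()
shm.close()
shm.unlink()                  # release the region
```

Only one copy of the data exists in physical memory, which is the same mechanism that lets many processes share a single copy of a common library's code pages.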
- System Adaptations:
  - Purgeable memory for low-memory situations
  - Process termination as last resort (OOM killer)
  - Speculative page loading for sequential access
  - Copy-on-write for efficient process forking
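Copy-on-write at fork time can be observed directly (POSIX only): after `fork()` both processes see the same data, and a write in the child copies only the touched pages, leaving the parent's copy unchanged. The buffer contents are illustrative, and since CPython's reference counting writes to many pages anyway, this demonstrates the semantics rather than measuring the savings.

```python
import os

data = bytearray(b"parent data")
r, w = os.pipe()
pid = os.fork()
if pid == 0:                         # child process
    os.close(r)
    data[:6] = b"child!"             # write triggers a per-page copy
    os.write(w, bytes(data))
    os._exit(0)
os.close(w)                          # parent process
child_view = os.read(r, 64)          # child saw its own modified copy
os.waitpid(pid, 0)
assert data == bytearray(b"parent data")  # parent's pages untouched
```

This is what makes `fork()` cheap even for processes with large address spaces: nothing is physically copied until one side writes.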
Remember
- Pure LRU is impractical; approximations work well in practice
- Dirty pages require special handling and increase overhead
- Multiple viable strategies exist (Clock, Two-Handed, Queue-based)
- Modern systems use sophisticated hybrid approaches
- Memory mapping and sharing reduce overall memory usage
- Systems must balance memory usage against performance
- Copy-on-write and purgeable memory provide additional flexibility