Virtual Memory
- Main Objective: Allow a process to execute even when its size exceeds the available physical memory.
- Early Days: Techniques like "overlay" required programmers to manage dynamic loading.
- Modern Day: The OS supports Virtual Memory, relieving programmers of this burden.
Advantages:
- Physical memory is used more efficiently, since each process keeps only the pages it actually needs in memory.
- The degree of multiprogramming can be raised, improving CPU utilization (unless thrashing occurs).
- I/O transfer time is reduced, since not all of a program's pages need to be loaded at once; loading an entire program requires more I/O operations and therefore more I/O time.
Demand Paging
- Definition: Demand paging is built on Page Memory Management (Paging) with a lazy swapper: a page is brought into memory only when it is needed. At process start, either a small set of pages expected to be used soon is loaded (prepaging) or no pages at all (pure demand paging), and the process then executes, faulting in further pages on demand.
- If all required pages are in memory → the process runs correctly.
- If a process tries to access a page that is not in memory → a "Page Fault" interrupt is triggered, prompting the OS to load the missing page.
- A Valid/Invalid Bit is added to the page table to indicate whether a page is in memory:
- V: Page is in memory.
- I: Page is not in memory.
- OS: Sets and clears this bit as pages are loaded into and evicted from memory.
- MMU: Only reads it; an access to a page marked I raises a page-fault trap (see the sketch below).
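A minimal C sketch of this mechanism, assuming a toy single-level page table with 4 KiB pages; the names page_table_entry and translate are illustrative, not taken from any real kernel:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative single-level page table entry: one valid/invalid bit
 * plus a frame number. Real MMUs pack these bits differently. */
typedef struct {
    bool     valid;  /* set by the OS: true = page is in memory   */
    uint32_t frame;  /* physical frame number when valid is true  */
} page_table_entry;

/* Hypothetical handler: on an access to an invalid page, the OS
 * loads the page from the backing store, updates the entry, and
 * restarts the faulting access. */
uint32_t translate(page_table_entry *pt, uint32_t page, uint32_t offset)
{
    if (!pt[page].valid) {
        /* MMU would raise a page-fault trap here; the OS then: */
        printf("page fault on page %u\n", page);
        pt[page].frame = 0;      /* pretend a free frame was found      */
        /* ... read the page from disk into that frame ...              */
        pt[page].valid = true;   /* OS marks the page as resident       */
    }
    return pt[page].frame * 4096 + offset;  /* 4 KiB pages assumed */
}

int main(void)
{
    page_table_entry pt[8] = {0};          /* 8-page toy address space   */
    uint32_t pa = translate(pt, 3, 0x10);  /* first access: page fault   */
    printf("physical address = 0x%x\n", pa);
    translate(pt, 3, 0x10);                /* second access: no fault    */
    return 0;
}
```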
Copy-on-Write
Fork() without Copy-on-Write
- Definition: In traditional fork() (without copy-on-write), after a parent process creates a child process, the child occupies a separate memory space, and its code and data sections are copied from the parent.
- Disadvantages:
- If many child processes are created, each requires new frames, consuming a lot of memory.
- Copying the parent's code/data for the child can be time-consuming and sometimes unnecessary (e.g., if the child immediately executes execlp()).
Fork() with Copy-on-Write
Copy-on-write (COW) is an optimization strategy where multiple callers share the same resources (e.g., memory or disk storage) until one tries to modify the resource, at which point a copy is made.
- Definition: After the parent process creates a child, they initially share the parent’s memory frames without allocating new frames for the child.
- Advantages:
- Significantly reduces memory demand.
- New frames are only allocated when the child modifies the page, improving efficiency.
- Both parent and child processes can run concurrently; detecting writes to shared pages requires MMU hardware support (a write to a write-protected shared page traps, and the OS copies that page before the write completes), as in the sketch below.
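A short C sketch of this behaviour, assuming a POSIX system whose fork() uses copy-on-write: parent and child initially share the data page holding shared_value, and the child's write triggers the copy, so each process ends up seeing its own value.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int shared_value = 42;   /* lives in a data page shared (COW) after fork() */

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); exit(1); }

    if (pid == 0) {
        /* This write touches a write-protected shared page; the MMU
         * faults, the OS copies the page, and only then does the store
         * complete, so the parent's copy stays untouched. */
        shared_value = 100;
        printf("child : shared_value = %d\n", shared_value);  /* 100 */
        exit(0);
    }

    wait(NULL);
    printf("parent: shared_value = %d\n", shared_value);      /* 42 */
    return 0;
}
```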
vfork() without Copy-on-Write (Virtual Memory Fork)
- Definition: Except for the stack, the parent and child processes share all memory; no pages are copied, and the parent is suspended until the child calls an exec function or exits.
- Scenario: The child immediately executes execlp() after being created, so copy-on-write is unnecessary. vfork() is often used on systems without an MMU.
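A minimal sketch of that scenario, assuming a POSIX system that provides vfork() and an ls command on the PATH:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = vfork();     /* parent is suspended; no pages are copied */
    if (pid < 0) { perror("vfork"); exit(1); }

    if (pid == 0) {
        /* The child must call exec*() or _exit() almost immediately;
         * modifying memory here would also modify the parent's memory. */
        execlp("ls", "ls", "-l", (char *)NULL);
        _exit(127);          /* reached only if execlp() fails */
    }

    wait(NULL);              /* parent resumes once the child has exec'd */
    printf("parent: child finished\n");
    return 0;
}
```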
Effective Access Time (EAT) and Page Fault Ratio in Virtual Memory
- EAT Formula:
\[
EAT = (1 - p) \times \text{memory access time} + p \times (\text{page fault overhead} + \text{swap out} + \text{swap in} + \text{restart overhead})
\]
- p: Page fault ratio.
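As a worked example (with illustrative numbers, not values from these notes): a 200 ns memory access time, an 8 ms total page-fault service time, and p = 0.001 give

```latex
% Illustrative values: 200 ns memory access, 8 ms fault service, p = 0.001
\[
EAT = 0.999 \times 200\,\text{ns} + 0.001 \times 8\,000\,000\,\text{ns}
    = 199.8\,\text{ns} + 8\,000\,\text{ns} \approx 8.2\,\mu\text{s}
\]
```

Even a 0.1% page fault ratio inflates the effective access time by roughly a factor of 40.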
To improve virtual memory efficiency, the goal is to reduce EAT by minimizing the page fault ratio.
Factors affecting the page fault ratio include the page replacement algorithm, the number of frames allocated to each process, and the program's locality of reference.
Page Replacement
- Definition: When a page fault occurs, and no free frame is available, the OS must replace a page. The OS selects a "victim page" to swap out to disk, freeing a frame for the missing page.
Page Replacement Policies:
- Local Replacement: The OS selects a victim page only from the process's frames, reducing thrashing but limiting memory utilization.
- Global Replacement: The OS can select any page from any process, improving memory utilization but increasing the chance of thrashing.
Page Replacement Algorithms
FIFO (First In, First Out)
- Definition: The oldest page is selected as the victim page.
- Analysis:
- Simple and easy to implement.
- High page fault ratio.
- May cause Belady’s Anomaly (where increasing frames increases page faults).
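A small C sketch of FIFO replacement, counting page faults for an illustrative reference string with 3 frames (the string and frame count are arbitrary, chosen only for demonstration):

```c
#include <stdbool.h>
#include <stdio.h>

#define FRAMES 3

/* Count page faults under FIFO replacement on a reference string. */
int fifo_page_faults(const int *refs, int n)
{
    int frames[FRAMES], next = 0, faults = 0;
    for (int i = 0; i < FRAMES; i++) frames[i] = -1;  /* empty frames */

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int j = 0; j < FRAMES; j++)
            if (frames[j] == refs[i]) { hit = true; break; }
        if (!hit) {
            frames[next] = refs[i];        /* evict the oldest page   */
            next = (next + 1) % FRAMES;    /* circular FIFO pointer   */
            faults++;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
    int n = sizeof refs / sizeof refs[0];
    printf("FIFO page faults: %d\n", fifo_page_faults(refs, n));
    return 0;
}
```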
OPT (Optimal Page Replacement)
- Definition: Replaces the page that won’t be used for the longest time in the future.
- Analysis:
- Optimal but impossible to implement (relies on future knowledge).
- Has the lowest page fault ratio.
- Exhibits stack property (no Belady's Anomaly).
LRU (Least Recently Used)
- Definition: Replaces the least recently used page.
- Analysis:
- Effective but costly to implement (requires hardware support).
- Exhibits stack property (no Belady's Anomaly).
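An analogous C sketch for LRU, tracking recency with simple timestamps (a real implementation would need hardware support such as reference bits or counters, as noted above):

```c
#include <stdio.h>

#define FRAMES 3

/* Count page faults under LRU replacement: evict the frame whose page
 * was used longest ago, tracked here with last-use timestamps. */
int lru_page_faults(const int *refs, int n)
{
    int frames[FRAMES], last_use[FRAMES], faults = 0;
    for (int i = 0; i < FRAMES; i++) { frames[i] = -1; last_use[i] = -1; }

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int j = 0; j < FRAMES; j++)
            if (frames[j] == refs[t]) { hit = j; break; }

        if (hit >= 0) {
            last_use[hit] = t;                 /* refresh recency on a hit  */
        } else {
            int victim = 0;                    /* least recently used frame */
            for (int j = 1; j < FRAMES; j++)
                if (last_use[j] < last_use[victim]) victim = j;
            frames[victim] = refs[t];
            last_use[victim] = t;
            faults++;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
    int n = sizeof refs / sizeof refs[0];
    printf("LRU page faults: %d\n", lru_page_faults(refs, n));
    return 0;
}
```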
Other algorithms include LRU Approximation, Second Chance, and LFU/MFU, with varying performance and implementation complexity.
Thrashing
- Definition: Occurs when a process spends more time swapping pages in/out than executing, causing CPU idleness and system performance degradation.
Solutions to Thrashing:
- Reduce multiprogramming degree.
- Use Page Fault Frequency Control to allocate or reallocate frames based on page fault frequency.
- Employ the Working Set Model to predict and allocate the necessary number of frames for each process.
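A small C sketch of the working-set idea, assuming the common definition: the working set at time t with window Δ is the set of distinct pages referenced during the last Δ references (the window size and reference string below are arbitrary, for illustration only).

```c
#include <stdbool.h>
#include <stdio.h>

#define DELTA 4   /* working-set window: number of most recent references */

/* Size of the working set at position t: the number of distinct pages
 * among the last DELTA references refs[t-DELTA+1 .. t]. */
int working_set_size(const int *refs, int t)
{
    int distinct = 0;
    int start = (t - DELTA + 1 > 0) ? t - DELTA + 1 : 0;
    for (int i = start; i <= t; i++) {
        bool seen = false;
        for (int j = start; j < i; j++)
            if (refs[j] == refs[i]) { seen = true; break; }
        if (!seen) distinct++;
    }
    return distinct;
}

int main(void)
{
    int refs[] = {1, 2, 1, 3, 4, 4, 3, 4, 1, 2};
    int n = sizeof refs / sizeof refs[0];
    for (int t = 0; t < n; t++)
        printf("t=%d  working set size = %d\n", t, working_set_size(refs, t));
    return 0;
}
```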