Lecture notes for CSC 256/456, 2-7-2000 ff

Read chapter 8 of the text, and the linking sections from PLP.
Expect the MM assignment on Wednesday.

================================================

Memory Management Techniques
----------------------------

Motivation

    Protection
        Given multiprogramming, memory must be shared among many
        processes, and each process must be protected from the effects
        of the others.  The kernel must also be protected from users.

    Flexible sharing
        A process should be able to execute using whatever memory is
        currently available.  This requires some mechanism for dynamic
        relocation, since static relocation requires a process to
        always execute at the same address.

    Caching and virtualization
        Some processes will require more memory than is available on
        the machine, even though at any one point in time only a small
        amount of that memory is in use.  Such processes cannot be
        executed without some scheme that allows a process to run
        without having its entire memory needs met at once.

Physical vs. Virtual Addresses

    Physical addresses refer to physical (hardware) memory.  Physical
    memory typically begins at address 0, and ranges in size from less
    than a megabyte on very small embedded devices to many gigabytes
    on the largest supercomputers.

    Virtual addresses refer to locations in a process's view of
    memory, which need not correspond directly to physical memory.
    The size of a virtual address space is limited by the size of
    addresses (n bits can specify an address space of size 2^n).  The
    most recent machines have an architectural limit of 2^64, but
    implementations tend to provide less than that (say 2^40 == 1TB).

    With hardware support, virtual and physical addresses can be
    independent.  It is common to have virtual address spaces much
    larger than the physical address space (so we can run very large
    programs in a small amount of memory), and even possible to have
    physical address spaces larger than a virtual address space (e.g.
    on the Cray T3D).

Progressive development of simple ideas

    Bare machine:
        no services, maximum flexibility
        no hardware support necessary

    Monoprogramming:
        one process in memory at a time; runs to completion
        hardware support of a "fence" for OS protection

    Overlays (for caching):
        user implements memory management
        no hardware support necessary

    Swapping:
        one process is in memory at a time, but it need not run to
        completion
        a process is "swapped out" to disk after some quantum (or
        after blocking); a new process is "swapped in" to memory to
        begin execution
        context switch time is now increased by latency + transfer
        time
            typical disk latency hasn't improved much over the years:
            say 5 ms (dominated by seek)
            transfer speeds have improved significantly, though not
            nearly as much as processor speeds: say 12 MB/s
            (determined by density and rotational speed)
            to move a 1MB program takes 5 ms + 1/12 s ~= 90 ms
        no hardware support (other than a disk) necessary
        NB: swapping can still be important to keep a machine from
        *thrashing* -- spending all its time moving pages to and from
        disk

-------------------------------

Partitions

    The simplest approach that lets you have several processes in
    memory at once, with protection firewalls between them.

    Two hardware alternatives:
        - high and low limit registers for the running process; this
          forces static relocation
        - base and limit registers; these allow dynamic relocation,
          but require an add on every access (see the sketch below)

    Boundaries between partitions can be set at system init time, or
    varied dynamically.  The HW is the same for both options; only the
    OS is different.
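    To make the base/limit alternative concrete, here is a minimal
    sketch in C of the work the hardware does on every access.  It is
    not from the text; the names (relocation_regs, translate) and the
    sizes are invented for illustration.

        /* Dynamic relocation with base/limit registers: one bounds
           check and one add per access.  All names are illustrative. */
        #include <stdint.h>
        #include <stdio.h>

        typedef struct {
            uint32_t base;    /* physical address where the partition starts */
            uint32_t limit;   /* size of the partition, in bytes */
        } relocation_regs;

        /* Returns the physical address, or -1 in place of the
           protection fault the hardware would raise. */
        int64_t translate(relocation_regs r, uint32_t vaddr)
        {
            if (vaddr >= r.limit)
                return -1;                   /* out of bounds */
            return (int64_t)r.base + vaddr;  /* the add on every access */
        }

        int main(void)
        {
            relocation_regs r = { 0x40000, 0x10000 };           /* 64KB partition */
            printf("%llx\n", (long long)translate(r, 0x1234));  /* 41234 */
            printf("%lld\n", (long long)translate(r, 0x20000)); /* -1 */
            return 0;
        }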
    Dynamic partitions make much better use of memory, but require
    some sort of dynamic space management -- free lists, buddy
    systems, first fit, best fit -- all the stuff you learned (or were
    supposed to :-) in a data structures class.

    Partitions suffer from fragmentation.

        Internal fragmentation is wasted space inside allocated
        partitions.  It happens with fixed-size partitions when
        partition sizes don't match program sizes, and with
        variable-size partitions when users over-estimate program size
        in order to make sure they won't run out of space when
        allocating memory on the fly.

        External fragmentation is wasted space outside allocated
        partitions.  It happens when the total available space is
        large enough to accommodate some runnable process, but that
        space is divided into non-contiguous chunks, none of which is
        large enough by itself.

    You can cope with external fragmentation by compacting (scooting
    everything down to one end of memory), but
        - that's expensive (copying takes a lot of time; figure the
          bus can move maybe 500MB/s), and
        - it requires dynamic relocation (i.e. base/bound registers
          rather than limit registers), since programs that have been
          running for a while are full of pointers that depend on
          things being at certain addresses, and most languages don't
          provide the hooks you need to find and modify all the
          pointers.

    Two major techniques have evolved to address the fragmentation
    problem: paging and segmentation.  Paging mainly attacks external
    fragmentation; segmentation mainly attacks internal fragmentation.
    Most real systems these days use a hybrid of the two, either paged
    segmentation or segmented paging.

-------------------------------

Paging

    Paging attacks external fragmentation by getting around the
    requirement that physical memory be allocated in contiguous,
    variable-size chunks.  Instead, we divide the virtual address
    space of a process into small, equal-size chunks called pages, and
    divide the physical memory of the machine into chunks of the same
    size, called page frames.  We then map pages to frames in such a
    way that the nice linear virtual address space can be composed of
    physical frames scattered all over the place.

    Address translation is performed by the memory management unit
    (MMU).  A virtual address is divided into a page/offset pair.  The
    association between pages and frames is made by the page table: a
    dictionary data structure that maps page numbers to information
    about the page, including the physical frame number and protection
    attributes (readable and/or writable in kernel and/or user mode).
    The offset is used to find the specific location within the frame.
    Dynamic relocation is implicit in this scheme.  (A sketch of the
    translation appears after the list of properties below.)

    Properties

        Virtual addresses and physical addresses are independent.  A
        contiguous virtual address space can be implemented without
        using contiguous physical addresses (no external
        fragmentation).

        As described, we still require N physical frames to satisfy a
        request for N pages.  However, the physical frames can be
        anywhere in memory.  We will shortly see that paging allows
        the implementation of *virtual memory*, in which not all N
        pages need to be in memory at once in order for the process to
        run.  Pure segmentation, as it turns out, does not allow us to
        use virtual memory.  The hybrid approaches (paged segmentation
        and segmented paging) do allow it.

        Paging solves the external fragmentation problem.

        Serious internal fragmentation can still occur if processes
        over-estimate their memory needs in order to ensure that some
        structure (e.g. the heap) in the middle of their virtual
        address space can grow without bumping into anything else.
        Segmentation (and segmented paging) will take care of this.
        Virtual memory also gets rid of the worst of it.

        Protection and sharing can take place on a page-by-page basis.

        Implicit relocation makes it easy to do swapping (which you
        may still want in order to prevent thrashing).
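    Here is the promised sketch of paged translation, in C.  It is not
    from the text; the page size, table size, and field names are
    invented (4KB pages, a single linear page table).

        /* Paged address translation with one linear page table:
           4KB pages, so the low 12 bits of an address are the offset. */
        #include <stdint.h>
        #include <stdio.h>

        #define PAGE_SHIFT 12
        #define PAGE_SIZE  (1u << PAGE_SHIFT)   /* 4096 */
        #define NPAGES     1024                 /* pages in this toy address space */

        typedef struct {
            uint32_t frame : 20;   /* physical frame number */
            uint32_t valid : 1;    /* is the mapping present? */
            uint32_t write : 1;    /* writable? (one of the protection bits) */
        } pte_t;

        pte_t page_table[NPAGES];  /* the dictionary: page number -> frame */

        /* Translate a virtual address; returns -1 where the MMU would
           raise a fault (unmapped page, or store to a read-only page). */
        int64_t translate(uint32_t vaddr, int is_store)
        {
            uint32_t page   = vaddr >> PAGE_SHIFT;       /* high-order bits */
            uint32_t offset = vaddr & (PAGE_SIZE - 1);   /* low 12 bits */

            if (page >= NPAGES || !page_table[page].valid)
                return -1;                               /* page fault */
            if (is_store && !page_table[page].write)
                return -1;                               /* protection fault */

            return ((int64_t)page_table[page].frame << PAGE_SHIFT) | offset;
        }

        int main(void)
        {
            page_table[3] = (pte_t){ .frame = 7, .valid = 1, .write = 1 };
            printf("%llx\n", (long long)translate(0x3abc, 0));  /* 7abc */
            printf("%lld\n", (long long)translate(0x5000, 0));  /* -1: unmapped */
            return 0;
        }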
    Page table organization and access

        The memory management unit (MMU) uses the page table to do
        address translation.  In recent machines the MMU tends to be
        on the same chip as the processor; it may also be a separate
        chip or chips.

        The page table can be stored in main memory, but looking
        things up in such a table on every memory access slows all
        loads and stores (including instruction fetches!) by a factor
        of two or more, which is generally unacceptable.

        If the page table isn't very big, it can be kept in hardware
        registers, where it becomes part of the context for a process
        (this slows context switches, but provides efficient
        translation).  (The PDP-11 worked this way.)  On most modern
        machines, however, the page table is far too big for that.

        Almost all modern machines keep the most active translations
        in an associative set of registers known as the translation
        lookaside buffer (TLB), sometimes called an address
        translation cache (ATC).  So long as a program has reasonable
        locality, most translations will be a "TLB hit", using a page
        table entry already in the TLB.  The TLB must either be
        re-loaded (or at least purged) on a context switch, or else
        its entries must be tagged with the id of the address space
        for which they are valid.  (Tags are common, but not
        universal, on modern machines.)

        Now what happens on a TLB miss?  We have to look things up in
        the full page table.  In most machines, hardware does the
        lookup (and hence defines the format of the table).  In some
        machines (e.g. MIPS and Alpha processors), the HW traps to the
        OS on a TLB miss, and reloading (and the format of the tables)
        is entirely up to the OS.  (A sketch of a software-managed TLB
        appears below.)  The kernel must generally augment
        hardware-defined page tables, if any, with additional data
        structures that describe things like which processes are
        sharing a frame.  Page tables can even be paged!  (This is
        easy w/ SW TLB reload, and possible with multi-level
        (tree-structured) HW page tables.)

        All of the traditional implementations of dictionaries are
        options for page table organization.  The PDP-11 used a simple
        array (implemented in hardware), indexed by page number.  The
        VAX had three simple arrays: one for the kernel, one for user
        text and data, and one for the user stack.  The Sparc and
        MC680x0 use multi-way search trees.  The PowerPC and PA-RISC
        architectures use hash tables (called "inverted page tables").
        Mach uses a very nice linked-list organization in the
        machine-independent portion of its VM system.  We'll examine
        several of these examples later as case studies.
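    The sketch below shows a software-managed TLB in the MIPS/Alpha
    style just described.  All sizes and names are invented, and the
    OS miss handler is a toy stand-in; on a real machine the hardware
    search happens in parallel and the reload happens in a trap
    handler.

        /* Software-managed TLB sketch.  Entries are tagged with an
           address-space id (ASID), so the TLB need not be purged on a
           context switch. */
        #include <stdint.h>
        #include <stdio.h>

        #define TLB_ENTRIES 64
        #define PAGE_SHIFT  12

        typedef struct {
            uint32_t asid;    /* address space this entry belongs to */
            uint32_t page;    /* virtual page number (the tag) */
            uint32_t frame;   /* physical frame number */
            int      valid;
        } tlb_entry;

        tlb_entry tlb[TLB_ENTRIES];

        /* Stand-in for the OS's miss handler.  On a real machine this
           is a trap handler that may consult any table format the OS
           likes (array, tree, hash table, ...).  Here it fabricates an
           identity mapping so the sketch is self-contained. */
        uint32_t os_tlb_reload(uint32_t asid, uint32_t page)
        {
            (void)asid;
            return page;      /* pretend page N lives in frame N */
        }

        uint32_t translate(uint32_t asid, uint32_t vaddr)
        {
            uint32_t page = vaddr >> PAGE_SHIFT;
            uint32_t off  = vaddr & ((1u << PAGE_SHIFT) - 1);

            /* The hardware searches all entries associatively; this
               loop is the software stand-in for that parallel search. */
            for (int i = 0; i < TLB_ENTRIES; i++)
                if (tlb[i].valid && tlb[i].asid == asid && tlb[i].page == page)
                    return (tlb[i].frame << PAGE_SHIFT) | off;   /* TLB hit */

            /* TLB miss: trap to the OS, which loads an entry and retries. */
            uint32_t frame = os_tlb_reload(asid, page);
            tlb[page % TLB_ENTRIES] = (tlb_entry){ asid, page, frame, 1 };
            return (frame << PAGE_SHIFT) | off;
        }

        int main(void)
        {
            printf("%x\n", translate(1, 0x2345));  /* miss, then reload: 2345 */
            printf("%x\n", translate(1, 0x2987));  /* now a hit: 2987 */
            return 0;
        }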
-------------------------------

Segmentation

    Internal fragmentation can still be a problem with paging, because
    of the need to over-estimate space needs for data structures like
    the stack and heap that change size on the fly, to make sure they
    won't bump into each other.  Of course, if you're doing virtual
    memory (see the next set of notes), you don't have to waste
    physical page frames on parts of the address space that aren't
    being used, but you still have to worry about the space you need
    to store the translation information.

    Segmentation gets rid of internal fragmentation by abandoning the
    notion of a contiguous linear address space.  Instead, the user
    can think of memory as an unordered collection of segments of
    variable size (e.g., a code segment for proc P, a code segment for
    proc Q, a data segment for array A, the heap, the stack, mapped
    file F, etc.).

    In a pure segmentation scheme, these segments are managed the way
    whole processes were managed in a partitioned machine.  In a
    partitioned machine, the hardware has a base/bound pair of special
    hardware registers that are used for address translation and
    protection.  In a segmented machine, the hardware has several
    base/bound pairs.  Typically, each program has a (potentially
    large) *segment table* containing base/bound values for all
    segments to which the program is permitted access.  The hardware
    then provides some way, visible in the assembly-level instruction
    set, to specify which segment (base/bound pair) should be used for
    a given memory access.  On the i286, for example, there are four
    *segment registers* that can be used to identify segment table
    entries.

    The segment tables (one per process) are in main memory, but there
    is also a hardware base/bound register pair, not visible to the
    user, associated with each segment register.  When the user loads
    a segment register with a segment table index, the hardware loads
    the corresponding base/bound pair from the appropriate segment
    table entry.  When the user executes an instruction that has an
    operand in memory, the addressing mode resolves to a segment
    register id (one of the four segment registers, on the i286) and a
    segment offset.  The segment register id identifies a base/bound
    pair, which is used to turn the offset into a physical address.
    Relocation is implicit.  (A sketch of this translation appears at
    the end of this section.)

    Memory allocation for segments is (as with partitions) a dynamic
    memory allocation problem, which can lead to external
    fragmentation.  Paged segmentation addresses this.

    A natural use of segmentation (as in Multics, where every file
    ever created has its own segment) leads to very large segment
    tables.  Segment tables can be hashed, cached, or paged.
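    Here is the promised sketch of pure-segmentation translation, in
    C.  The names and sizes are invented; the point is that segment
    registers cache base/bound pairs loaded from the in-memory segment
    table, so each access needs only a check and an add.

        /* Pure segmentation: an operand address resolves to
           (segment register id, offset).  All names are illustrative. */
        #include <stdint.h>
        #include <stdio.h>

        #define NSEGREGS 4                   /* e.g. four, as on the i286 */

        typedef struct {
            uint32_t base;                   /* where the segment starts */
            uint32_t bound;                  /* its length in bytes */
        } seg_desc;

        seg_desc segment_table[1024];        /* segments this process may use */
        seg_desc segreg[NSEGREGS];           /* hidden base/bound copies */

        /* Loading a segment register copies the table entry into the
           hidden hardware pair; later accesses need no memory lookup. */
        void load_segreg(int sr, int table_index)
        {
            segreg[sr] = segment_table[table_index];
        }

        int64_t translate(int sr, uint32_t offset)
        {
            if (offset >= segreg[sr].bound)
                return -1;                             /* protection fault */
            return (int64_t)segreg[sr].base + offset;  /* implicit relocation */
        }

        int main(void)
        {
            segment_table[2] = (seg_desc){ 0x80000, 0x2000 };  /* an 8KB segment */
            load_segreg(1, 2);        /* "load segment register 1 with entry 2" */
            printf("%llx\n", (long long)translate(1, 0x100));   /* 80100 */
            printf("%lld\n", (long long)translate(1, 0x3000));  /* -1: fault */
            return 0;
        }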
-------------------------------

Combining Paging and Segmentation

    Segmented Paging

        In a pure paging scheme, the user thinks in terms of a
        contiguous linear address space, and internal fragmentation is
        a problem when portions of that address space include
        conservatively large estimates of the size of dynamic
        structures.  Segmented paging is a scheme in which (1)
        logically distinct (e.g. dynamically-sized) portions of the
        address space are deliberately given virtual addresses a LONG
        way apart, so we never have to worry about things bumping into
        each other, and (2) the page table is implemented in such a
        way that the big unused sections don't cost us much.
        Basically the only page table organization that doesn't work
        well with segmented paging is a single linear array.  Trees,
        inverted (hash) tables, and linked lists all work fine.

        If we always start segments at multiples of some large power
        of two, we can think of the high-order bits of the virtual
        address as specifying a segment, and the low-order bits as
        specifying an offset within the segment.  If we have
        tree-structured page tables in which we use k bits to select a
        child of the root, and we always start segments at some
        multiple of 2^(wordsize-k), then the top level of the tree
        looks very much like a segment table.  The only difference is
        that its entries are selected by the high-order address bits,
        rather than by some explicit architecturally-visible mechanism
        like segment registers.

        Basically all modern operating systems on page-based machines
        use segmented paging.

    Paged Segmentation

        In a pure segmentation scheme, we still have to do dynamic
        space management to allocate physical memory to segments.
        This leads to external fragmentation, and forces us to think
        about things like compaction.  It also doesn't lend itself to
        virtual memory (see the next set of notes).  To address these
        problems, we can page the segments of a segmented machine.
        This is paged segmentation.  Multics did it back in the '60s.
        The i386 (and 486 and Pentium) does it today.

        Instead of containing base/bound pairs, the segment table
        entries of a machine with paged segmentation indicate how to
        find the page table for the segment.  In Multics, there was a
        separate page table for each segment.  The segment offset was
        interpreted as consisting of a page number and a page offset.
        On the i386, there is a single page table for each process
        (address space).  The base address in the segment table entry
        is added to the segment offset to produce a "linear address"
        that is then partitioned into a page number and page offset,
        and looked up in the page table in the normal way.

        Note that in a machine with pure segmentation, given a fast
        way to find base/bound pairs (e.g. segment registers), there
        is no need for a TLB.  Once you go to paged segmentation, you
        need a TLB.

    The difference between segmented paging and paged segmentation
    lies in the user's programming model, and in the addressing modes
    of the CPU.  On a segmented architecture, the user generally
    specifies addresses using an effective address that includes a
    segment register specification.  On a paged architecture, there
    are no segment registers.

    In practical terms, managing segment registers (loading them with
    appropriate values at appropriate times) is a bit of a nuisance
    for the assembly language programmer or compiler writer.  On the
    other hand, since it takes only a few bits to indicate a segment
    register, while the base address in the segment table entry can
    have many bits, segments provide a means of expanding the virtual
    address space beyond 2^(wordsize).  We can't do segmented paging
    on a machine with 16-bit addresses, and it's beginning to get
    problematic on machines with 32-bit addresses.  We certainly can't
    build a Multics-style single-level store, in which every file is a
    segment that can be accessed with ordinary loads and stores, on a
    32-bit machine.  Segmented architectures provide a way to get the
    effect we want (lots of logically separate segments that can grow
    without practical bound) without requiring that we buy into very
    large addresses.  As 64-bit architectures become more common, it
    is possible that segmentation will become less popular.  Time will
    tell.

    One might think that paged segmentation has an additional
    advantage over segmented paging: protection information is
    logically associated with a segment, and could perhaps be
    specified in the segment table and then left out of the page
    table.  Unfortunately, protection bits are used for lots of
    purposes other than simply making sure you can't write your code
    or execute your data.  We'll see several such uses this semester.
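    A short sketch of the address carving used in segmented paging,
    under the assumptions above with 32-bit addresses, k = 10, and 4KB
    pages.  The segment number and field widths are invented for
    illustration.

        /* Segmented paging: k = 10 bits select a top-level
           ("segment-like") entry, 10 more select a page within it,
           and 12 bits are the page offset. */
        #include <stdint.h>
        #include <stdio.h>

        #define TOP(va)  (((va) >> 22) & 0x3ff)   /* top-level index: bits 31..22 */
        #define MID(va)  (((va) >> 12) & 0x3ff)   /* page within segment: 21..12 */
        #define OFF(va)  ((va) & 0xfff)           /* offset within page: 11..0 */

        int main(void)
        {
            /* Start each segment at a multiple of 2^22 (4MB), and the
               top 10 bits act exactly like a segment number. */
            uint32_t stack_seg = 5;                    /* hypothetical segment */
            uint32_t va = (stack_seg << 22) | 0x1234;  /* address in that segment */
            printf("top=%u mid=%u off=0x%x\n", TOP(va), MID(va), OFF(va));
            /* prints: top=5 mid=1 off=0x234 */
            return 0;
        }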
-------------------------------

Case study: the i386

    - Global segment table for the OS, etc.; local segment table for
      the user program.  Up to 16K different segments.
    - Six segment registers.  A segment register contains a "selector"
      that references a segment table entry.
    - When a segment register is loaded, the hardware loads a hidden
      extension of the register with the contents of the corresponding
      element of the segment table.
    - Virtual addresses specify segment registers.
    - Paging of segments can be turned on and off.  If off, we have
      pure segmentation, and the OS does dynamic memory allocation.
      If on, there is a single two-level page table, pointed to by a
      (process-specific) hardware pointer.  There are NOT separate
      page tables for each segment.
    - This means you can put the same value in all the segment
      registers and have pure paging.  So the machine is really a
      hybrid, capable of either extreme or of things in between.  (The
      i286 has only pure segmentation.)
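    To tie the case study together, here is a sketch in C of the
    i386-style combined path: segment base plus offset yields a
    "linear address", which then goes through a single two-level page
    table.  The 10/10/12 bit split matches the 386's 4KB pages, but
    the data structures are toy stand-ins, not the real descriptor or
    PTE formats.

        #include <stdint.h>
        #include <stdio.h>

        typedef struct { uint32_t base, bound; } seg_desc;

        uint32_t page_dir[1024];        /* dir index -> which page table to use */
        uint32_t page_table[16][1024];  /* toy pool of page tables (each has
                                           1024 entries, as on the real 386) */

        int64_t translate(seg_desc seg, uint32_t offset)
        {
            if (offset >= seg.bound)
                return -1;                             /* segment bound fault */

            uint32_t linear = seg.base + offset;       /* the "linear address" */
            uint32_t dir    = linear >> 22;            /* top 10 bits */
            uint32_t page   = (linear >> 12) & 0x3ff;  /* middle 10 bits */
            uint32_t off    = linear & 0xfff;          /* low 12 bits */

            uint32_t frame = page_table[page_dir[dir]][page];
            return ((int64_t)frame << 12) | off;
        }

        int main(void)
        {
            seg_desc data = { 0x00400000, 0x10000 };   /* 64KB segment at 4MB */
            page_dir[1] = 0;                           /* linear 4MB..8MB -> table 0 */
            page_table[0][0] = 42;                     /* its page 0 -> frame 42 */
            printf("%llx\n", (long long)translate(data, 0x123));  /* 2a123 */
            return 0;
        }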