Chapter 8: Memory Management

Presentation titled "Chapter 8: Memory Management" — presentation transcript:

1 Chapter 8: Memory Management

2 Chapter 8: Memory Management
Background Swapping Contiguous Allocation Paging Segmentation Segmentation with Paging

3 Background A program must be brought into memory and placed within a process for it to be run. Input queue – the collection of processes on the disk that are waiting to be brought into memory to run the program. User programs go through several steps before being run.

4 Mapping Program Address Space to Memory Address Space

5 Binding of Instructions and Data to Memory
Address binding of instructions and data to memory addresses can happen at three different stages. Compile time: If the memory location is known a priori, absolute code can be generated; if the starting address changes, the code must be recompiled. For example, .com-format programs on MS-DOS are absolute code bound at compile time. Load time: The compiler must generate relocatable code if the memory location is not known at compile time. Execution time: Binding is delayed until run time if the process can be moved during its execution from one memory segment to another. This requires hardware support for address mapping (e.g., base and limit registers). User programs typically refer to memory addresses with symbolic names such as "i", "count", and "averageTemperature". These symbolic names must be mapped, or bound, to physical memory addresses, which typically occurs in several stages: addresses in the source program are generally symbolic (such as count); a compiler will typically bind these symbolic addresses to relocatable addresses (such as "14 bytes from the beginning of this module"); the linkage editor or loader will in turn bind the relocatable addresses to absolute addresses (such as 74014). Linking is the process of taking several smaller executables and joining them together as a single larger executable. Loading is bringing the executable into memory prior to execution.

6 Multistep Processing of a User Program
The process of associating program instructions and data with physical memory addresses is called address binding, or relocation. A user program will go through several steps (some of which may be optional) before being executed (see Fig. 8.3). Addresses may be represented in different ways during these steps. Addresses in the source program are generally symbolic (such as count). A compiler will typically bind these symbolic addresses to relocatable addresses (such as "14 bytes from the beginning of this module"). The linkage editor or loader will in turn bind the relocatable addresses to absolute addresses (such as 74014). Each binding is a mapping from one address space to another. Classically, the binding of instructions and data to memory addresses can be done at any step along the way: Compile time. The compiler translates symbolic addresses to absolute addresses. If you know at compile time where the process will reside in memory, then absolute code can be generated (static). Load time. The compiler translates symbolic addresses to relative (relocatable) addresses; the loader translates these to absolute addresses. If it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code (static). Execution time. If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time; the absolute addresses are generated by hardware. Most general-purpose OSs use this method (dynamic). Static: new locations are determined before execution. Dynamic: new locations are determined during execution.

7 Simple Program How do we bind it to a physical address?

8 Binding Logical to Physical
stw = store

9 Binding Logical to Physical

10 Binding Logical to Physical

11 Logical vs. Physical Address Space
An address generated by the CPU is called a logical address, whereas an address as seen by the memory unit is called a physical address. A central concept of memory management is binding a logical address space to a separate physical address space. Logical address space – the set of all logical addresses generated by a program. Physical address space – the set of all physical addresses corresponding to those logical addresses. The compile-time and load-time address-binding schemes result in identical logical and physical addresses; the execution-time address-binding scheme, however, results in differing logical and physical addresses. In this case, the logical address is referred to as a virtual address.

12 Memory-Management Unit (MMU)
A hardware device that maps virtual addresses to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory. The user program never sees the real physical addresses; it deals only with logical addresses, and the memory-mapping hardware converts them to physical addresses.

13 Dynamic relocation using a relocation register
Logical Versus Physical Address Space. An address generated by the CPU is commonly referred to as a logical address, whereas an address seen by the memory unit (that is, the one loaded into the memory-address register of the memory) is referred to as a physical address. The execution-time address-binding scheme results in differing logical and physical addresses. In this case, we usually refer to the logical address as a virtual address. The run-time mapping from virtual to physical addresses is done by a hardware device called the memory-management unit (MMU). Figure 8.4: Dynamic relocation using a relocation register. For the time being, we illustrate this mapping with a simple MMU scheme, which is a generalization of the base-register scheme (see Fig. 8.4). The base register is now called a relocation register. The value in the relocation register is added to every address generated by a user process at the time it is sent to memory. For example, if the base is at 14000, then an attempt by the user to address location 0 is dynamically relocated to location 14000; an access to location 346 is mapped to location 14346. The user program never sees the real physical addresses. The program can create a pointer to location 346, store it in memory, manipulate it, and compare it with other addresses, all as the number 346. The user program deals with logical addresses; the memory-mapping hardware converts logical addresses into physical addresses. The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.
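The base-at-14000 example above can be sketched in a few lines. This is a minimal illustration of what the MMU does, not real hardware; the limit value is an assumption added for the bounds check.

```python
# A minimal sketch of dynamic relocation with a relocation (base) register.
# The base value 14000 follows the slide's example; the limit is assumed.

RELOCATION_REGISTER = 14000  # physical start of the process's partition
LIMIT_REGISTER = 16384       # size of the logical address space (assumed)

def mmu_translate(logical_address: int) -> int:
    """Map a logical address to a physical address, as the MMU would."""
    if not (0 <= logical_address < LIMIT_REGISTER):
        raise MemoryError(f"trap: address {logical_address} out of range")
    return RELOCATION_REGISTER + logical_address

print(mmu_translate(0))    # 14000
print(mmu_translate(346))  # 14346
```

The user code only ever manipulates 0 and 346; the relocated values exist solely on the memory bus.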

14 Dynamic Loading To obtain better memory-space utilization, we can use dynamic loading. With dynamic loading, a routine is not loaded into memory until it is called. All routines are kept on disk in a relocatable load format. Better memory-space utilization: an unused routine is never loaded. Especially useful when large amounts of code are needed to handle infrequently occurring cases. No special support from the operating system is required; dynamic loading is implemented through program design. In our discussion so far, the entire program and all data of a process must be in physical memory for the process to execute. With dynamic loading, the main program is loaded into memory and is executed. When a routine needs to call another routine, the calling routine first checks to see whether the other routine has been loaded. If not, the relocatable linking loader is called to load the desired routine into memory and to update the program's address tables to reflect this change. Then control is passed to the newly loaded routine. The advantage of dynamic loading is that an unused routine is never loaded. Operating systems may help the programmer, however, by providing library routines to implement dynamic loading.
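As an analogy, Python's importlib can play the role of the relocatable linking loader: the routine's module is brought in only on first use. The module name json is just a convenient standard-library stand-in, not something the slide prescribes.

```python
# A sketch of dynamic loading: a routine is not brought in until called.
# importlib stands in for the relocatable linking loader.

import importlib
import sys

def call_routine(module_name: str, routine: str, *args):
    # Check whether the routine's module has already been loaded...
    if module_name not in sys.modules:
        importlib.import_module(module_name)  # ...load it on demand
    # ...then pass control to the (now loaded) routine.
    return getattr(sys.modules[module_name], routine)(*args)

print(call_routine("json", "dumps", {"a": 1}))  # {"a": 1}
```

The first call pays the loading cost; later calls find the module already resident.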

15 Dynamic Linking The concept of dynamic linking is similar to that of dynamic loading; here, though, linking, rather than loading, is postponed until execution time. A small piece of code called a stub is used to locate the appropriate memory-resident library routine; the stub replaces itself with the address of the routine and executes the routine. Dynamic linking is usually used with system libraries, such as a language subroutine library; without this facility, every program on the system would need its own copy of the language library inside its executable image. Operating-system support is needed to check whether the routine lies within the process's memory address space; e.g., when processes in main memory are protected from one another, the operating system checks whether the requested routine is outside the process's address space. Several processes can be allowed to access memory at the same address. Dynamic Linking and Shared Libraries: Figure 8.3 also shows dynamically linked libraries. With dynamic linking, a stub is included in the image for each library-routine reference. The stub is a small piece of code that indicates how to locate the appropriate memory-resident library routine, or how to load the library if the routine is not already present. When the stub is executed, it checks to see whether the needed routine is already in memory; if not, the program loads the routine into memory. This feature can be extended to library updates (such as bug fixes): a library may be replaced by a new version, and all programs that reference the library will automatically use the new version.
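The stub mechanism can be sketched in Python. This is illustrative only: a real stub is machine code emitted by the linker, and the `Stub` class and `locate` callback here are invented names; math.sqrt merely stands in for a library routine.

```python
# A sketch of the stub idea in dynamic linking: the first call resolves the
# reference to the real routine, and later calls dispatch directly.

import math

class Stub:
    def __init__(self, locate):
        self._locate = locate   # how to find/load the real routine
        self._real = None

    def __call__(self, *args):
        if self._real is None:        # first call: resolve the reference
            self._real = self._locate()
        return self._real(*args)      # later calls: direct dispatch

sqrt = Stub(lambda: math.sqrt)        # math.sqrt plays the library routine
print(sqrt(9.0))  # 3.0
```

Only one copy of the "library routine" exists, however many programs hold stubs that point to it.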

16 Swapping A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution. Backing store – a fast disk large enough to accommodate copies of all memory images for all users; the system must also provide direct access to these memory images. Roll out, roll in – a swapping variant used for priority-based scheduling algorithms: when a higher-priority process arrives and wants service, the memory manager swaps out a lower-priority process so that the higher-priority process can be loaded and executed. Normally, a process that is swapped out will be swapped back into the same memory space it occupied previously.

17 Schematic View of Swapping

18 Contiguous Allocation
Main memory is usually divided into two partitions: the resident operating system, usually held in low memory along with the interrupt vector, and user processes, held in high memory. Single-partition allocation: a relocation-register scheme is used to protect user processes from each other and from changes to operating-system code and data. The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses; each logical address must be less than the limit register.
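The base/limit protection check from the figures below can be illustrated in a few lines. The base and limit values are illustrative; in hardware this comparison happens on every memory reference, with an out-of-range address trapping to the operating system.

```python
# A sketch of base/limit protection: any address outside
# [base, base + limit) causes a trap to the operating system.

BASE = 300040    # illustrative base-register value
LIMIT = 120900   # illustrative limit-register value

def check_access(address: int) -> bool:
    """Return True if the access is legal, False if it would trap."""
    return BASE <= address < BASE + LIMIT

print(check_access(300040))  # True  (first legal address)
print(check_access(420940))  # False (one past the last legal address)
```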

19 A base and a limit register define a logical address space

20 HW Address Protection with Base and Limit Registers

21 Contiguous Allocation (Cont.)
Multiple-partition allocation Hole – a block of available memory; holes of various sizes are scattered throughout memory. When a process arrives, it is allocated memory from a hole large enough to accommodate it. The operating system maintains information about: a) allocated partitions, b) free partitions (holes). (Figure: successive memory maps showing the OS and processes 2, 5, 8, 9, and 10, with holes appearing and being reused as processes terminate and arrive.)

22 Dynamic Storage-Allocation Problem
How to satisfy a request of a given size from a list of free holes. First-fit: allocate the first hole that is big enough. Best-fit: allocate the smallest hole that is big enough; must search the entire list; produces the smallest leftover hole. Worst-fit: allocate the largest hole; must also search the entire list; produces the largest leftover hole. First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
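The three placement strategies can be sketched over a free-hole list. Holes are (start, size) pairs, and the sizes below are made up for illustration.

```python
# First-fit, best-fit, and worst-fit over a list of free holes.
# Each function returns the index of the chosen hole, or None.

def first_fit(holes, request):
    for i, (start, size) in enumerate(holes):
        if size >= request:        # take the first hole that fits
            return i
    return None

def best_fit(holes, request):
    fits = [(size, i) for i, (start, size) in enumerate(holes) if size >= request]
    return min(fits)[1] if fits else None   # smallest hole that is big enough

def worst_fit(holes, request):
    fits = [(size, i) for i, (start, size) in enumerate(holes) if size >= request]
    return max(fits)[1] if fits else None   # largest hole

holes = [(0, 100), (200, 500), (800, 200), (1200, 300)]
print(first_fit(holes, 212))  # 1  (the 500-byte hole is the first that fits)
print(best_fit(holes, 212))   # 3  (the 300-byte hole leaves the least over)
print(worst_fit(holes, 212))  # 1  (the 500-byte hole leaves the most over)
```

Note that best-fit and worst-fit both scan every hole, while first-fit can stop early, which is one reason first-fit tends to be faster.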

23 Fragmentation External fragmentation – enough total free memory exists to satisfy a request, but it is not contiguous. Internal fragmentation – allocated memory may be larger than the requested memory; this size difference is memory internal to a partition that goes unused. External fragmentation can be reduced by compaction: shuffle the memory contents to place all free memory together in one large block. Compaction is possible only if relocation is dynamic, and it is done at execution time. I/O problem: latch the job in memory while it is involved in I/O, or do I/O only into OS buffers.

24 External dan Internal Fragmentation
A: 18K, B: 26K – external fragmentation example

25 Paging The logical address space of a process can be noncontiguous; the process is allocated physical memory wherever it is available. Divide physical memory into fixed-sized blocks called frames (size is a power of 2, between 512 bytes and 8192 bytes). Divide logical memory into blocks of the same size, called pages. Keep track of all free frames. To run a program of size n pages, find n free frames and load the program. Set up a page table to translate logical to physical addresses. Internal fragmentation may occur. Paging is a memory-management scheme that permits the physical address space of a process to be noncontiguous. Paging avoids the considerable problem of fitting memory chunks of varying sizes onto the backing store. The backing store also has the fragmentation problems discussed in connection with main memory, except that access is much slower, so compaction is impossible. Because of its advantages over earlier methods, paging in its various forms is commonly used in most OSs. Traditionally, support for paging has been handled by hardware. However, recent designs have implemented paging by closely integrating the hardware and OS, especially on 64-bit microprocessors.

26 Address Translation Scheme
An address generated by the CPU is divided into: Page number (p) – used as an index into a page table, which contains the base address of each page in physical memory. Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit. The basic method for implementing paging involves breaking physical memory into fixed-sized blocks called frames and breaking logical memory into blocks of the same size called pages. When a process is to be executed, its pages are loaded into any available memory frames from the backing store. The backing store is divided into fixed-sized blocks that are the same size as the memory frames.
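The p/d split and table lookup can be sketched in a few lines, assuming a 4 KB page size; the page-table contents below are illustrative.

```python
# Splitting a logical address into page number and offset, then translating.
# Page size 4096 bytes (2^12), so the low 12 bits are the offset.

PAGE_SIZE = 4096
OFFSET_BITS = 12          # log2(PAGE_SIZE)

def split(logical_address: int):
    p = logical_address >> OFFSET_BITS     # page number: page-table index
    d = logical_address & (PAGE_SIZE - 1)  # offset within the page
    return p, d

def translate(logical_address: int, page_table):
    p, d = split(logical_address)
    frame = page_table[p]                  # base-address lookup
    return frame * PAGE_SIZE + d

page_table = {0: 5, 1: 6, 2: 1}            # page -> frame (illustrative)
print(split(8195))                  # (2, 3)
print(translate(8195, page_table))  # 4099  (frame 1, offset 3)
```

Because the page size is a power of two, the split is just a shift and a mask; no division is needed.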

27 Address Translation Architecture
The hardware support for paging is illustrated in the figure. Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d). The page number is used as an index into a page table. The page table contains the base address of each page in physical memory. This base address is combined with the page offset to define the physical memory address that is sent to the memory unit. The paging model of memory is shown in Fig. 8.8.

28 Paging Example

29 Paging Example

30 Implementasi dari Page Table
The page table is kept in main memory. The page-table base register (PTBR) points to the page table. The page-table length register (PTLR) indicates the size of the page table. In this scheme, every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction. The two-memory-access problem can be solved by using a special fast-lookup hardware cache called associative memory, or translation look-aside buffers (TLBs).

31 Associative Memory Associative memory – parallel search
Address translation (p, d): if p is in an associative register, get the frame number out; otherwise, get the frame number from the page table in memory. (Table: Page # → Frame #.) Associative memory, also known as content-addressable memory, is very high-speed, expensive memory that, unlike regular memory, is queried by the contents of a key field rather than by an address-based lookup. Within the memory-management hardware, associative memory is used to construct the translation look-aside buffer, or TLB. Each entry in the TLB contains the information that would be found in a page table, as well as the page number (which is used as the query key). Every entry in the TLB is compared with the key (the page number that we are looking up) simultaneously. This approach is known as associative mapping. Associative memory is fast but also expensive (more transistors and more chip real estate). For it to work well, it has to be co-located in the processor's memory-management unit. Given how big page tables can get, and that we need a page table per process, the TLB will not be used to store entire page tables but rather acts as a cache for frequently accessed page-table entries. This is a combined associative and direct-mapped approach to page translation and is used by practically every processor being made today. The hope with the TLB is that a large percentage of the pages we seek will be found in the TLB. The percentage of references that can be satisfied with a TLB lookup is called the hit ratio. A hit ratio close to 100% means that almost every memory reference is translated by associative memory. Ideally, the TLB will be designed to be fast enough to allow the CPU to do the logical-to-physical memory translation as well as the corresponding real-memory read/write operation in one memory access, so that the MMU does not create a performance penalty.
Unlike normal cache memory, the TLB is built into the MMU and caches individual table entries rather than a set of arbitrary memory locations. If a page is not in the TLB, then the MMU detects a cache miss and performs an ordinary page lookup (referencing the memory-resident page table). In doing so, one entry is removed from the TLB and replaced by the entry that was just looked up so that the next reference to that same page will result in a TLB hit.
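The hit/miss/replace behavior just described can be sketched with a dict standing in for the associative hardware. The capacity, mappings, and the crude FIFO-style eviction are all illustrative assumptions.

```python
# A sketch of TLB-assisted translation: try the associative lookup first;
# on a miss, consult the in-memory page table and cache the entry.

PAGE_TABLE = {0: 7, 1: 2, 2: 9}   # page -> frame, resident in memory
TLB = {}                          # small cache of recent entries
TLB_CAPACITY = 2

def lookup(page: int) -> int:
    if page in TLB:                    # TLB hit: no extra memory access
        return TLB[page]
    frame = PAGE_TABLE[page]           # TLB miss: extra memory access
    if len(TLB) >= TLB_CAPACITY:       # evict an entry (FIFO-ish here)
        TLB.pop(next(iter(TLB)))
    TLB[page] = frame                  # cache for the next reference
    return frame

print(lookup(1))  # 2  (miss, now cached)
print(lookup(1))  # 2  (hit)
```

Real TLBs compare the key against every entry in parallel in hardware; the dict merely models the same hit-or-fall-back logic sequentially.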

32 Paging Hardware dengan TLB

33 Effective Access Time Associative lookup = ε time units
Assume the memory cycle time is 1 microsecond. Hit ratio – the percentage of times that a page number is found in the associative registers; the ratio is related to the number of associative registers. Hit ratio = α. Effective Access Time (EAT): EAT = (1 + ε)α + (2 + ε)(1 − α) = 2 + ε − α
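The formula can be checked numerically. The 20% lookup cost and 80% hit ratio below are illustrative figures, not values from the slide.

```python
# The slide's effective-access-time formula, computed numerically.
# epsilon = associative lookup time (in memory-access units),
# alpha = hit ratio, one memory access = 1 time unit.

def eat(epsilon: float, alpha: float) -> float:
    # hit: 1 memory access + TLB lookup; miss: 2 accesses + TLB lookup
    return (1 + epsilon) * alpha + (2 + epsilon) * (1 - alpha)

# e.g. a lookup costing 0.2 of a memory cycle with an 80% hit ratio:
print(eat(0.2, 0.80))  # approximately 1.4 (= 2 + 0.2 - 0.8)
```

Note that the closed form 2 + ε − α follows directly by expanding the two products, which is a quick sanity check on the formula.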

34 Memory Protection Memory protection is implemented by associating a protection bit with each frame. A valid–invalid bit is attached to each entry in the page table: "valid" indicates that the associated page is in the process's logical address space, and is thus a legal page; "invalid" indicates that the page is not in the process's logical address space.

35 Valid (v) or Invalid (i) Bit dalam Page Table

36 Page Table Structure Hierarchical Paging, Hashed Page Tables, Inverted Page Tables

37 Hierarchical Page Tables
Break up the logical address space into multiple page tables A simple technique is a two-level page table

38 Two-Level Paging Example
A logical address (on a 32-bit machine with a 4K page size) is divided into: a page number consisting of 20 bits and a page offset consisting of 12 bits. Since the page table is paged, the page number is further divided into: a 10-bit page number and a 10-bit page offset. Thus, a logical address is as follows: page number | page offset = p1 (10 bits) | p2 (10 bits) | d (12 bits), where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table.
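The 10/10/12 split can be sketched with shifts and masks; the sample address is chosen so each field is easy to read off.

```python
# Splitting a 32-bit logical address for two-level paging: 10/10/12 bits.

def split_two_level(addr: int):
    p1 = addr >> 22               # outer page-table index (top 10 bits)
    p2 = (addr >> 12) & 0x3FF     # index within that outer page (next 10)
    d = addr & 0xFFF              # page offset (low 12 bits)
    return p1, p2, d

# 0x00402003: outer index 1, inner index 2, offset 3
print(split_two_level(0x00402003))  # (1, 2, 3)
```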

39 Two-Level Page-Table Scheme

40 Address-Translation Scheme
Address-translation scheme for a two-level 32-bit paging architecture

41 Hashed Page Tables Common in address spaces > 32 bits
The virtual page number is hashed into a page table. This page table contains a chain of elements hashing to the same location. Virtual page numbers are compared in this chain searching for a match. If a match is found, the corresponding physical frame is extracted.
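The chained hash scheme just described can be sketched directly; the bucket count and page-to-frame mappings are illustrative.

```python
# A sketch of a hashed page table: each bucket holds a chain of
# (virtual page number, frame) pairs that hash to the same slot.

NUM_BUCKETS = 8
table = [[] for _ in range(NUM_BUCKETS)]

def insert(vpn: int, frame: int):
    table[hash(vpn) % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn: int):
    for entry_vpn, frame in table[hash(vpn) % NUM_BUCKETS]:
        if entry_vpn == vpn:       # walk the chain, comparing page numbers
            return frame           # match found: extract the frame
    return None                    # no match: page fault

insert(5, 42)
insert(13, 7)      # 13 % 8 == 5 % 8, so these two chain in one bucket
print(lookup(5))   # 42
print(lookup(13))  # 7
print(lookup(9))   # None
```

The deliberate collision between pages 5 and 13 shows why each bucket must store the page number alongside the frame.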

42 Hashed Page Table

43 Inverted Page Table One entry for each real page of memory
Entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns that page Decreases memory needed to store each page table, but increases time needed to search the table when a page reference occurs Use hash table to limit the search to one — or at most a few — page-table entries
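The one-entry-per-frame layout and the search it implies can be sketched as follows; the table contents are illustrative.

```python
# A sketch of an inverted page table: one entry per physical frame,
# each holding (process id, virtual page number). Translation searches
# the table for a matching entry; the entry's index is the frame number.

inverted = [
    (1, 0),   # frame 0 holds page 0 of process 1
    (2, 0),   # frame 1 holds page 0 of process 2
    (1, 3),   # frame 2 holds page 3 of process 1
]

def translate(pid: int, vpn: int):
    for frame, entry in enumerate(inverted):
        if entry == (pid, vpn):    # linear search over all frames
            return frame
    return None                    # not resident: page fault

print(translate(1, 3))  # 2
print(translate(2, 3))  # None
```

This linear scan is exactly the search cost the slide mentions; the suggested hash table would replace it with a near-constant-time lookup.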

44 Inverted Page Table Architecture

45 Shared Pages Shared code
One copy of read-only (reentrant) code shared among processes (i.e., text editors, compilers, window systems). Shared code must appear in same location in the logical address space of all processes Private code and data Each process keeps a separate copy of the code and data The pages for the private code and data can appear anywhere in the logical address space

46 Shared Pages Example

47 Segmentation Memory-management scheme that supports user view of memory A program is a collection of segments. A segment is a logical unit such as: main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, arrays

48 User’s View of a Program

49 Logical View of Segmentation
(Figure: segments 1-4 in user space mapped noncontiguously into physical memory space.)

50 Segmentation Architecture
Logical address consists of a two-tuple: <segment-number, offset>. Segment table – maps two-dimensional user-defined addresses into one-dimensional physical addresses; each table entry has: base – contains the starting physical address where the segment resides in memory; limit – specifies the length of the segment. Segment-table base register (STBR) points to the segment table's location in memory. Segment-table length register (STLR) indicates the number of segments used by a program; segment number s is legal if s < STLR
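The <segment-number, offset> translation with its two checks can be sketched as follows; the base and limit values are illustrative.

```python
# Segment-table translation: <s, d> -> base + d, with the legality
# checks from the slide (s < STLR, d < limit).

segment_table = [
    {"base": 1400, "limit": 1000},   # segment 0
    {"base": 6300, "limit": 400},    # segment 1
]

def translate(s: int, d: int) -> int:
    if s >= len(segment_table):          # s must be < STLR
        raise MemoryError("trap: bad segment number")
    entry = segment_table[s]
    if d >= entry["limit"]:              # offset must be within the segment
        raise MemoryError("trap: offset beyond segment limit")
    return entry["base"] + d

print(translate(1, 53))  # 6353
```

Unlike paging, the offset here is checked against a per-segment limit, because segments vary in length.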

51 Segmentation Architecture (Cont.)
Relocation: dynamic, by segment table. Sharing: shared segments; same segment number. Allocation: first fit/best fit; external fragmentation.

52 Segmentation Architecture (Cont.)
Protection. With each entry in the segment table associate: a validation bit (validation bit = 0 means an illegal segment) and read/write/execute privileges. Protection bits are associated with segments; code sharing occurs at the segment level. Since segments vary in length, memory allocation is a dynamic storage-allocation problem. A segmentation example is shown in the following diagram.

53 Address Translation Architecture

54 Example of Segmentation

55 Sharing of Segments

56 Segmentation with Paging – MULTICS
The MULTICS system solved problems of external fragmentation and lengthy search times by paging the segments Solution differs from pure segmentation in that the segment-table entry contains not the base address of the segment, but rather the base address of a page table for this segment

57 MULTICS Address Translation Scheme

58 Segmentation with Paging – Intel 386
As shown in the following diagram, the Intel 386 uses segmentation with paging for memory management with a two-level paging scheme

59 Intel 80386 Address Translation

60 Linux on Intel 80x86 Uses minimal segmentation to keep the memory-management implementation more portable. Uses 6 segments: kernel code; kernel data; user code (shared by all user processes, using logical addresses); user data (likewise shared); task state (per-process hardware context); LDT. Uses 2 protection levels: kernel mode and user mode.

61 End of Chapter 8

62

63 Free Frames Before allocation After allocation

64 Source: http://thumbsup2life.blogspot

65 Best Fit Best-fit memory allocation makes the best use of memory space but is slower in making allocations. In the illustration below, on the first processing cycle, jobs 1 to 5 are submitted and processed first. After the first cycle, job 2 and job 4 (located in block 5 and block 3 respectively, both having one turnaround) are replaced by jobs 6 and 7, while jobs 1, 3, and 5 remain in their designated blocks. In the third cycle, job 1 remains in block 4, while jobs 8 and 9 replace jobs 7 and 5 respectively (both having 2 turnarounds). On the next cycle, jobs 9 and 8 remain in their blocks while job 10 replaces job 1 (having 3 turnarounds). On the fifth cycle, only jobs 9 and 10 remain to be processed, and there are 3 free memory blocks for incoming jobs; but since there are only 10 jobs, those blocks remain free. On the sixth cycle, job 10 is the only remaining job, and finally, on the seventh cycle, all jobs have been successfully processed and executed, and all the memory blocks are free.

66 First Fit

67 Worst Fit

