Parallel Computing (Komputasi Paralel)
Lecture 01: Introduction
Yeni Herdiyeni, Departemen Ilmu Komputer IPB, Even Semester 2010
Description
Covers the need for and classification of parallel machines (SISD, SIMD, MISD, MIMD, SPMD), interprocessor communication, shared memory, message passing, interconnection networks, parallel algorithm design, efficiency and speedup of parallel processing, and example applications of parallel processing. Software used: MPI (Message Passing Interface).
Instructors
Dr. Yeni Herdiyeni, S.Si, M.Komp
Hendra Rahmawan, S.Si, M.T
Endang Purnama, S.Si, M.Komp
Grading Components
Midterm exam (UTS), final exam (UAS), assignments, quizzes, project
Course Topics #1
Introduction: definition of and motivation for parallel processing
System architecture: shared-memory multiprocessor systems; distributed message-passing multicomputers; shared memory and the classification of MIMD and SIMD shared-memory machines
Network topology
The message-passing paradigm using MPI
Principles of parallel algorithm design
Performance analysis of parallel processing
Midterm exam (UTS)
Course Topics #2
Parallel programming: distributed memory
Review of critical sections using Pthreads; synchronization with semaphores; implementing semaphores in an MPI environment
Sorting
Dense matrix algorithms
Shared-memory applications: the producer-consumer problem, the readers-writers problem, the dining philosophers problem
Project presentation/discussion
Final exam (UAS)
Course materials can be accessed at
Textbooks
Grama, Ananth, Gupta, Anshul, Karypis, George, & Kumar, Vipin. Introduction to Parallel Computing, 2nd ed. Pearson Addison Wesley.
Quinn, Michael J. Parallel Programming in C with MPI and OpenMP, International Edition. McGraw-Hill.
Wilkinson, Barry & Allen, Michael. Parallel Programming, 2nd ed. Pearson Education International.
Jordan, Harry F. & Alaghband, Gita. Fundamentals of Parallel Processing. Prentice Hall.
Motivation: Classical Science
[diagram: Nature, Observation, Physical Experimentation, Theory]
Modern Scientific Method
[diagram: Nature, Observation, Numerical Simulation, Physical Experimentation, Theory]
Modern Parallel Architectures
Two basic architectural schemes: distributed memory and shared memory. Most computers now have a mixed architecture.
What is Parallel and Distributed Computing?
Solving a single problem faster using multiple CPUs.
Parallel: memory shared among all CPUs. Distributed: local memory per CPU.
Common issues: partitioning, synchronization, dependencies.
Distributed Memory
[diagram: nodes, each containing CPUs and local memory, connected by a network]
Shared Memory
[diagram: several CPUs attached to a single shared memory]
Seeking Concurrency
Data dependence graphs
Data parallelism
Functional (task) parallelism
Pipelining
Interconnection Networks
Uses of interconnection networks: connect processors to shared memory; connect processors to each other.
Interconnection media types: shared medium; switched medium.
Most Common Networks (switched)
Cube, hypercube, n-cube
Torus in 1, 2, ..., N dimensions
Fat tree
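As an illustrative aside (not from the slide), the defining property of an n-cube is that two nodes are neighbors exactly when their binary labels differ in a single bit, so a node can enumerate its neighbors by XOR-ing its own rank with each power of two. A minimal C sketch:

#include <stdio.h>

int main(void) {
    int dim = 3;     /* a 3-cube with 8 nodes (example size) */
    int rank = 5;    /* node 101 in binary */
    /* flipping one bit per dimension yields each neighbor's label */
    for (int d = 0; d < dim; d++)
        printf("neighbor across dimension %d: %d\n", d, rank ^ (1 << d));
    return 0;
}

For node 5 (binary 101) this prints neighbors 4 (100), 7 (111), and 1 (001).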
Real Shared Memory
[diagram: CPUs connected through a system bus to shared memory banks]
Virtual Shared Memory
[diagram: nodes, each with CPUs, local memory, and a HUB, connected through a network]
Mixed Architectures
[diagram: shared-memory nodes, each with several CPUs and a memory, connected by a network]
General MPI Program Structure
#include <mpi.h>                          /* MPI include file */

int main(int argc, char *argv[]) {
    int np, rank, ierr;                   /* variable declarations */
    ierr = MPI_Init(&argc, &argv);        /* initialize MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &np);   /* total number of processes */
    /* do work and make message-passing calls */
    ierr = MPI_Finalize();                /* terminate MPI environment */
    return 0;
}
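With a typical MPI installation, such a program is compiled and launched with the usual wrapper commands (names vary by distribution; mpicc and mpirun are the common ones, and hello.c is a hypothetical file name):

mpicc hello.c -o hello     # compile and link against the MPI library
mpirun -np 4 ./hello       # launch the program with 4 processes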
The Message-Passing Programming Paradigm
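The slide's figure is not preserved; as a minimal sketch of the paradigm, the point-to-point pair MPI_Send/MPI_Recv moves a value from rank 0 to rank 1 (the value 42 and tag 0 are arbitrary choices; run with at least two processes):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        value = 42;    /* data to send */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("process 1 received %d from process 0\n", value);
    }
    MPI_Finalize();
    return 0;
}

The sender names the destination rank and a tag; the receiver matches on source and tag, which is the essence of the message-passing model.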
Message
Foster's Design Methodology
1. Partitioning
2. Communication
3. Agglomeration
4. Mapping
Example program (1)
Calculating the value of pi by the series: pi = 4 (1 - 1/3 + 1/5 - 1/7 + ...)
pi = 3.141...
[diagram: process 0 starts the calculation, the other processes reply "OK!", and the final value is calculated by process 0]
Sequential Algorithm
[worked example: summing the series term by term, 4 - 4/3 + 4/5 - 4/7 + 4/9 - ..., accumulating the partial sums]
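A minimal MPI sketch of this computation (an illustration, not the course's own code): each process sums every np-th term of the series, and process 0 collects the partial sums with MPI_Reduce, matching the "calculated by process 0" picture two slides back. The term count of 1,000,000 is an arbitrary choice:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int np, rank;
    long i, terms = 1000000;      /* number of series terms (arbitrary) */
    double part = 0.0, pi = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &np);

    /* each process sums every np-th term of 4 - 4/3 + 4/5 - 4/7 + ... */
    for (i = rank; i < terms; i += np)
        part += (i % 2 == 0 ? 4.0 : -4.0) / (2.0 * i + 1.0);

    /* process 0 collects the partial sums */
    MPI_Reduce(&part, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi = %f\n", pi);  /* approaches 3.141... */

    MPI_Finalize();
    return 0;
}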
Phases of Parallel Algorithm
Phase 1: inner product computation (row i of A times b yields c_i).
Phase 2: all-gather communication (every process obtains the complete vector c).
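A minimal sketch of these two phases in C with MPI (an illustration; the matrix order N = 8 and the sample values are assumptions, and N is taken to be divisible by the number of processes):

#include <mpi.h>
#include <stdio.h>

#define N 8    /* matrix order; assumed divisible by the process count */

int main(int argc, char *argv[]) {
    double A[N][N], b[N], c[N], local[N];
    int np, rank, i, j;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &np);

    int rows = N / np;          /* rows owned by this process */
    int first = rank * rows;    /* index of this process's first row */

    /* sample data; each process only really needs its own rows of A */
    for (i = 0; i < N; i++) {
        b[i] = 1.0;
        for (j = 0; j < N; j++)
            A[i][j] = i + j;
    }

    /* phase 1: inner product computation for the owned rows */
    for (i = 0; i < rows; i++) {
        local[i] = 0.0;
        for (j = 0; j < N; j++)
            local[i] += A[first + i][j] * b[j];
    }

    /* phase 2: all-gather so every process holds the complete vector c */
    MPI_Allgather(local, rows, MPI_DOUBLE, c, rows, MPI_DOUBLE,
                  MPI_COMM_WORLD);

    if (rank == 0)
        for (i = 0; i < N; i++)
            printf("c[%d] = %g\n", i, c[i]);

    MPI_Finalize();
    return 0;
}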
Example
4x0 + 6x1 + 2x2 - 2x3 = 8
2x0 + 5x2 = 4
-4x0 - 3x1 - 5x2 + 4x3 = 1
8x0 + ... = 40
Partitioning
[diagram: the data is divided between processes P0 and P1]
Communication
[diagram sequence over four slides: data exchanged between P0 and P1 step by step]
Shared memory multiprocessor using a single bus
Process
A process is "a program in execution": a program loaded in memory.
[figure: a high-level view of a UNIX process]
Threads
A thread is a stream of control within a process.
[figure: a high-level view of threads in a UNIX process]
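A minimal Pthreads sketch (an illustration anticipating the Pthreads review listed in the syllabus; the worker function and its contents are hypothetical): two streams of control run inside one process and share its address space. Compile with -pthread:

#include <pthread.h>
#include <stdio.h>

/* each thread executes this function as its own stream of control */
void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running inside the same process\n", id);
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int id[2] = {0, 1};

    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &id[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);    /* wait for both threads to finish */
    return 0;
}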
Parallel Bubble Sort
An iteration can start before the previous iteration has finished, provided it does not overtake the previous bubbling action. A sketch of this idea follows below.
(Slides for Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers, 2nd ed., by B. Wilkinson & M. Allen. Pearson Education Inc. All rights reserved.)
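This overlap idea is commonly packaged as odd-even transposition sort. A sequential C sketch (an illustration, not the slide's own code): within each phase the compared pairs are disjoint, so every compare-exchange in a phase could execute in parallel without one overtaking another:

#include <stdio.h>

/* odd-even transposition sort: even phases compare pairs (0,1), (2,3), ...;
   odd phases compare pairs (1,2), (3,4), ...; n phases sort n elements */
void odd_even_sort(int a[], int n) {
    for (int phase = 0; phase < n; phase++) {
        for (int i = phase % 2; i + 1 < n; i += 2) {
            if (a[i] > a[i + 1]) {
                int tmp = a[i];
                a[i] = a[i + 1];
                a[i + 1] = tmp;
            }
        }
    }
}

int main(void) {
    int a[] = {5, 2, 8, 1, 9, 3};
    odd_even_sort(a, 6);
    for (int i = 0; i < 6; i++)
        printf("%d ", a[i]);    /* prints: 1 2 3 5 8 9 */
    printf("\n");
    return 0;
}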
Virtual Topology