Introduction To Parallel Programming


Introduction to Parallel Programming
Haitao Wei
University of Delaware
http://www.udel.edu
Computer Architecture and Parallel Systems Laboratory
http://www.capsl.udel.edu

Outline
Part 1: Introduction to parallel programming
Part 2: Parallel programming tutorials: MPI, Pthreads, OpenMP

Outline
- Models for Parallel Systems
- Parallelization of Programs
- Levels of Parallelism: instruction level, data parallelism, loop parallelism, functional/task parallelism
- Parallel Programming Patterns
- Performance Metrics

Models for Parallel Systems
- Machine Model: describes the machine at the lowest level of abstraction of the hardware, e.g., registers.
- Architecture Model: describes the architecture at the level of how the processing units, memory organization, and interconnect are organized, together with the execution model of instructions.
- Computation Model: a formal model of the architecture model used for designing and analyzing algorithms, e.g., RAM, PRAM.
- Programming Model: describes the machine from the programmer's point of view: how the programmer writes code for it.

Parallel Programming Model
- A parallel programming model is influenced by the architecture design, the programming language, the compiler, and the runtime.
- Several criteria distinguish different models:
  - the level of parallelism (instruction level, data level, loop level, procedural level)
  - implicit or user-defined explicit parallelism
  - how the parallel program parts are specified
  - the execution model of the parallel units (SIMD, SPMD, synchronous, asynchronous)
  - how they communicate (explicit communication or shared variables)


Parallelization of Programs
Typical parallelization steps:
1. Partitioning/Decomposition: the algorithm or program is split into tasks, and the dependencies between the tasks are identified.
2. Scheduling: tasks are assigned to processes or threads. For a static decomposition the assignment can be done in the initialization phase at program start (static scheduling), but scheduling can also be done during program execution (dynamic scheduling).
3. Mapping: processes or threads are mapped to physical processors or cores (execution units). In the simplest case, each process or thread is mapped to a separate processor or core; if fewer cores than threads are available, multiple threads must be mapped to a single core. The mapping can be done by the operating system, but it can also be supported by program statements. The main goal of the mapping step is an equal utilization of the processors or cores while keeping the communication between processors as small as possible.
Fig. 3.1 illustrates these typical parallelization steps for a given sequential application algorithm: partitioning into tasks, scheduling of the tasks onto processes 1-4, and mapping of the processes onto processors P1-P4.

Parallelization of Programs: Partitioning / Task Decomposition
- Task: a sequence of computations; the unit of parallelism. Tasks can be defined at different levels: instruction level, loop level, functional level.
- Task granularity: coarse grained or fine grained.
- The compromise is between the number of tasks and their granularity: enough tasks to keep all processors busy, and enough granularity to amortize the scheduling/mapping overhead.

Parallelization of Programs: Assigning Tasks to Processes/Threads
- A process normally executes multiple different tasks.
- The goal is load balance: each process should have about the same amount of computation to perform.
- The assignment can be static (in the initialization phase at program start) or dynamic (during program execution).

Parallelization of Programs: Mapping Processes to Physical Processors/Cores
- In the simplest case, each process or thread is mapped to a separate processor or core.
- Goal: equal utilization of the physical processors or cores while keeping the communication between the processors as small as possible.


Parallelism at Instruction Level
- Task unit: one instruction.
- Dependencies: true, anti, and output dependencies between instructions.
- Scheduling: schedule independent instructions to execute on different functional units.
Example: for (i = 1; i <= n; i++) C[i] = A[i] + B[i] - A[i]*B[i];
Loop: LD  R1, @A
      LD  R2, @B
      ADD R3, R1, R2
      MUL R4, R1, R2
      SUB R5, R3, R4
      ST  R5, @C
      JNZ Loop
The ADD and MUL are independent of each other, so they can be issued to different functional units in parallel.

Parallelism at Instruction Level: How to Program It?
- Write assembly code by hand: find the dependencies and schedule the instructions yourself.
- Let the hardware do it automatically: superscalar processors.
- Let the compiler do it automatically: scheduling techniques for VLIW processors.

Data Parallelism
- Task unit: one operation on a single element.
- Dependencies: none.
- Scheduling: pack different data elements into SIMD instructions.
Scalar version:
for (i = 0; i < n; i++)
    C[i] = A[i] + B[i] - A[i]*B[i];
SIMD version (4 elements per instruction):
for (i = 0; i < n; i += 4)
    C[i:i+3] = A[i:i+3] + B[i:i+3] - A[i:i+3]*B[i:i+3];

Data Parallelism: How to Program It?
- Write assembly code (or intrinsics) by hand using SIMD instructions, e.g. MMX, SSE (a sketch follows below).
- Let the compiler do it automatically with auto-vectorization, e.g. gcc "-ftree-vectorize -msse2"; usually not as efficient as hand-coded SIMD.
- Use a data-parallel programming language.
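To make the hand-coded option concrete, here is a small sketch (ours, not from the slides) of the example loop vectorized with SSE intrinsics; it assumes float arrays and an n that is a multiple of 4:

#include <xmmintrin.h>   /* SSE intrinsics */

/* Illustrative sketch: C[i] = A[i] + B[i] - A[i]*B[i], 4 floats per step.
   Assumes n is a multiple of 4; otherwise a scalar tail loop is needed. */
void vec_kernel(const float *A, const float *B, float *C, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 a = _mm_loadu_ps(&A[i]);          /* load 4 elements of A */
        __m128 b = _mm_loadu_ps(&B[i]);          /* load 4 elements of B */
        __m128 r = _mm_sub_ps(_mm_add_ps(a, b),  /* (A+B) - (A*B)        */
                              _mm_mul_ps(a, b));
        _mm_storeu_ps(&C[i], r);                 /* store 4 results      */
    }
}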

Loop Parallelism
- Task unit: one iteration of the loop.
- Dependencies: dependencies between loop iterations.
- Scheduling: schedule different loop iterations to execute on different processors/cores.
Original loop (no dependencies between iterations):
for (i = 0; i < n; i++)
    C[i] = A[i] + B[i] - A[i]*B[i];
Split across two cores:
Core 0: for (i = 0;   i < n/2; i++) C[i] = A[i] + B[i] - A[i]*B[i];
Core 1: for (i = n/2; i < n;   i++) C[i] = A[i] + B[i] - A[i]*B[i];

Loop Parallelism: How to Program It?
- Write multithreaded code by hand: decompose the loop into different threads.
- Use a high-level programming language, e.g. OpenMP (a complete sketch follows below):
#pragma omp parallel for
for (i = 0; i < n; i++)
    C[i] = A[i] + B[i] - A[i]*B[i];
- Let the compiler do it: still a research topic, with some experimental compilers, e.g. PLUTO.
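For completeness, a minimal self-contained version of the OpenMP loop above might look like this (our sketch; compile with something like gcc -fopenmp):

#include <omp.h>
#include <stdio.h>
#define N 1000000

int main(void)
{
    static float A[N], B[N], C[N];
    for (int i = 0; i < N; i++) { A[i] = i; B[i] = i + 1; }

    /* Each thread gets a chunk of the iteration space. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        C[i] = A[i] + B[i] - A[i]*B[i];

    printf("C[10] = %f\n", C[10]);
    return 0;
}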

Functional Parallelism
- Task unit: code segments (statements, basic blocks, loops, functions).
- Dependencies: dependencies between tasks.
- Scheduling: schedule different tasks to execute on different processors/cores.
Example: Fib(n) = Fib(n-1) + Fib(n-2)
Core 0: f1 = Fib(n-1); f2 = result received from Core 1; return f1 + f2;
Core 1: f = Fib(n-2); return f;

Functional Parallelism: How to Program It?
- Write multithreaded code by hand: decompose the computation into different threads.
- Use a high-level programming language, e.g. OpenMP, Cilk, Codelet (an OpenMP-task version is sketched below for comparison). For example, in Cilk:
int fib(int n)
{
    if (n < 2) return n;
    int x = cilk_spawn fib(n-1);
    int y = fib(n-2);
    cilk_sync;
    return x + y;
}
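For comparison only (not in the original slides), the same recursion can be written with OpenMP tasks; the spawn/sync pair maps to task/taskwait. A real version would add a cutoff below which it falls back to sequential recursion.

#include <omp.h>
#include <stdio.h>

int fib(int n)
{
    if (n < 2) return n;
    int x, y;
    /* Spawn fib(n-1) as a task; the current thread computes fib(n-2). */
    #pragma omp task shared(x)
    x = fib(n - 1);
    y = fib(n - 2);
    #pragma omp taskwait      /* join: wait for the spawned task */
    return x + y;
}

int main(void)
{
    int result;
    #pragma omp parallel
    #pragma omp single        /* one thread starts the recursion */
    result = fib(20);
    printf("fib(20) = %d\n", result);
    return 0;
}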


Parallel Programming Patterns
Parallel programs consist of a collection of tasks that are executed by processes or threads. Patterns provide specific coordination structures for these processes/threads:
- Fork-Join
- SPMD and SIMD
- Master-Slave
- Client-Server
- Pipelining
- Task Pools
- Producer-Consumer

Fork-Join
- Initially there is only one main thread, which does the sequential work.
- Fork: the main thread wakes up (or creates) all worker threads, which execute the parallel tasks.
- Join: all threads are joined, and the main thread continues with the sequential task.
- The fork and join steps can be repeated for each parallel region. (See the Pthreads sketch below.)
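A bare-bones fork-join skeleton using Pthreads might look like the following sketch (thread count and function names are ours):

#include <pthread.h>
#include <stdio.h>
#define NUM_THREADS 4

/* Each worker handles one share of the parallel task. */
void *worker(void *arg)
{
    long id = (long)arg;
    printf("worker %ld doing its share of the parallel task\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid[NUM_THREADS];

    /* ... sequential work ... */

    /* fork: create all worker threads */
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);

    /* join: wait for all workers, then continue sequentially */
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);

    /* ... sequential work continues ... */
    return 0;
}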

Single Program, Multiple Data (SPMD)
- Single program: each processor executes the same copy of the program.
- Multiple data (shared or local): each processor has a logical copy of the data.
- Each processor uses its process id (p_id) to find its own part of the data. (A schematic partitioning function follows below.)
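In code, SPMD usually boils down to every process running the same function and using its id to select its slice of the data; a schematic sketch (names ours, with p_id and num_p assumed to come from the runtime, e.g. MPI_Comm_rank/MPI_Comm_size):

/* Every process executes this same function; p_id selects its slice. */
void spmd_work(int p_id, int num_p, int n,
               const float *A, const float *B, float *C)
{
    int chunk = n / num_p;
    int begin = p_id * chunk;
    int end   = (p_id == num_p - 1) ? n : begin + chunk; /* last one takes the remainder */

    for (int i = begin; i < end; i++)
        C[i] = A[i] + B[i] - A[i]*B[i];
}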

Master and Slave
- The master controls the main function and assigns work to the slaves (Slave 0, Slave 1, Slave 2, ...).
- The slaves do the actual computation assigned to them by the master thread. (An MPI-flavored sketch follows below.)
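A minimal master-slave shape in MPI could look like the following sketch (ours; the work items, tags, and result computation are placeholders):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                       /* master: hands out work, collects results */
        for (int s = 1; s < size; s++) {
            int work = 100 * s;            /* stand-in work descriptor */
            MPI_Send(&work, 1, MPI_INT, s, 0, MPI_COMM_WORLD);
        }
        for (int s = 1; s < size; s++) {
            int result;
            MPI_Recv(&result, 1, MPI_INT, s, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("result from slave %d: %d\n", s, result);
        }
    } else {                               /* slave: does the actual computation */
        int work, result;
        MPI_Recv(&work, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        result = work + rank;              /* stand-in for real computation */
        MPI_Send(&result, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}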

Task Pool
- A data structure in which tasks to be performed are stored and from which they can be retrieved for execution.
- A fixed number of threads is used for processing the tasks; in the figure, Threads 0-3 store tasks into and retrieve tasks from the shared pool.
- A thread can generate new tasks and insert them into the pool. (A Pthreads sketch follows below.)
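One simple way to realize a task pool is a mutex-protected array of tasks that a fixed set of worker threads pulls from; the sketch below (ours) uses plain integers as stand-in tasks. A real pool would also let a worker push newly generated tasks back under the same lock.

#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS 4
#define NUM_TASKS   16

static int tasks[NUM_TASKS];             /* the "pool": here just task ids      */
static int next_task = 0;                /* index of the next task to hand out  */
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&pool_lock);        /* retrieve a task atomically */
        int t = (next_task < NUM_TASKS) ? tasks[next_task++] : -1;
        pthread_mutex_unlock(&pool_lock);
        if (t < 0) break;                      /* pool empty: worker exits   */
        printf("processing task %d\n", t);     /* stand-in for real work     */
    }
    return NULL;
}

int main(void)
{
    for (int i = 0; i < NUM_TASKS; i++) tasks[i] = i;   /* store tasks */
    pthread_t tid[NUM_WORKERS];
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}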

Producer-Consumer
- Producer threads produce data which are used as input by consumer threads.
- A common data buffer is used, which can be accessed by both kinds of threads: producers store into it, consumers retrieve from it.
- Synchronization has to be used to ensure correct coordination between producer and consumer threads. (A condition-variable sketch follows below.)
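The usual synchronization for this pattern is a mutex plus condition variables; below is a compact sketch (ours) with a one-slot buffer and a single producer/consumer pair:

#include <pthread.h>
#include <stdio.h>

static int buffer, buffer_full = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

static void *producer(void *arg)
{
    for (int i = 0; i < 5; i++) {
        pthread_mutex_lock(&m);
        while (buffer_full) pthread_cond_wait(&not_full, &m);   /* wait for space */
        buffer = i; buffer_full = 1;                            /* store data     */
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&m);
    }
    return arg;
}

static void *consumer(void *arg)
{
    for (int i = 0; i < 5; i++) {
        pthread_mutex_lock(&m);
        while (!buffer_full) pthread_cond_wait(&not_empty, &m); /* wait for data */
        printf("consumed %d\n", buffer); buffer_full = 0;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&m);
    }
    return arg;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}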


Performance Metrics for Parallel Programs
- Sequential execution time: Ts
- Parallel execution time: Tp
- Overhead: To = p*Tp - Ts
- Speedup = Ts / Tp
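A made-up numeric example: if Ts = 100 s and the parallel version takes Tp = 30 s on p = 4 cores, then Speedup = 100/30 ≈ 3.3 and the overhead is To = p*Tp - Ts = 4*30 - 100 = 20 s.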

Example: Adding N numbers using N processors (tree reduction)
- Communication: log N steps
- Compute: log N steps
- Sequential: N-1 additions
- Speedup = (N-1) / (2 log N) = O(N / log N)
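For instance (numbers ours): with N = 1024, the tree needs log2(N) = 10 compute steps plus 10 communication steps, while the sequential sum needs N-1 = 1023 additions, giving a speedup of roughly 1023/20 ≈ 51.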

Amdahl’s law
A program's sequential execution time Ts is composed of an inherently sequential fraction, Ts*f, and a parallelizable fraction, Ts*(1-f). With p processors:
Speedup = Ts / (Ts*f + Ts*(1-f)/p) = 1 / (f + (1-f)/p) <= 1/f
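As an illustration (numbers ours): if f = 10% of the program is inherently sequential and p = 16, the formula gives Speedup = 1/(0.1 + 0.9/16) = 6.4, and no matter how many processors are added the speedup can never exceed 1/f = 10.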

Amdahl’s law
- Do you really need parallel computing for your program?
- The speedup is limited by the sequential part of your program.
- What is the bottleneck, and how much benefit can you get if you try to parallelize it?

References
- Introduction to Parallel Computing, http://www-users.cs.umn.edu/~karypis/parbook/
- Parallel Programming: for Multicore and Cluster Systems

MPI Programming: A Tutorial
Haitao Wei
Thanks to slides from Robert Pavel and Daniel Orozco
University of Delaware
http://www.udel.edu
Computer Architecture and Parallel Systems Laboratory
http://www.capsl.udel.edu

MPI
- MPI stands for Message Passing Interface: an application programmer interface for message passing.
- It is a specification; there is not one single MPI.
- The specification describes primitives that can be used to communicate and to program.
- Inspired by the Communicating Sequential Processes (CSP) paper.

Why Learn MPI?
- MPI is the de facto standard for programming MIMD systems.
- It can be used in SMP systems as well.
- Very versatile; it can run on:
  - symmetric or asymmetric systems
  - local networks or over the internet
  - serial processors

Why Learn MPI?
- Comparatively easy to use: all communication is explicit.
- Easy to learn.
- Reasonably good performance.
- Most importantly: everyone already uses it.

What is MPI? An Example
- MPI is like exchanging emails with your advisor.
- Your advisor gets hundreds of emails per day.
- If he doesn't know an email is coming, he can't respond.
- But if he is expecting an email, he'll read it.

MPI: Execution Model
- There are P processes that are created at the beginning.
- All processes execute the same program.
- Processes communicate and synchronize using send and receive operations.
- Operations can be blocking or non-blocking.

The MPI Programming Model
- Write ONE program that everybody runs.
- Initialize the MPI library: MPI_Init
- Clean up the MPI library at the end: MPI_Finalize

The Basics: MPI_Init and MPI_Finalize
MPI_Init: initializes the MPI execution environment.
Synopsis: int MPI_Init(int *argc, char ***argv)
Input parameters:
  argc - pointer to the number of arguments
  argv - pointer to the argument vector
MPI_Finalize: terminates the MPI execution environment.
Synopsis: int MPI_Finalize(void)

The Basics: MPI_Comm_rank
Determines the rank of the calling process in the communicator.
Synopsis: int MPI_Comm_rank(MPI_Comm comm, int *rank)
Input argument:
  comm - communicator (handle)
Output argument:
  rank - rank of the calling process in the group of comm (integer)

The Basics: MPI_Comm_size
Determines the size of the group associated with a communicator.
Synopsis: int MPI_Comm_size(MPI_Comm comm, int *size)
Input parameter:
  comm - communicator (handle)
Output parameter:
  size - number of processes in the group of comm (integer)

Code Example: Hello World!
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int npes, myrank;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    printf("From process %d out of %d, Hello World!\n", myrank, npes);
    MPI_Finalize();
    return 0;
}

Compile and run:
mpicc hello.c -o hello
mpirun -np 3 hello
From process 2 out of 3, Hello World!
From process 1 out of 3, Hello World!
From process 0 out of 3, Hello World!

Sending and Receiving Data
- Now let's actually do something useful.
- MPI, at its simplest, is a series of matched sends and receives: Host A sends a message to Host B, and Host B receives the message.
- These sends and receives are blocking by default.
- What is blocking?

A Visual Example
Both painters are working.

A Visual Example
Painter A asks: "Should I use red paint?" Painter B keeps working.

A Visual Example
Painter A waits (blocked) for an answer; Painter B keeps working.

A Visual Example
Painter B answers: "Yes. Use red" (Ack). Painter A: "Yay, I got an ack. Back to work."

A Visual Example
Both painters are working again.

Now with code?
First, let's learn us some syntax!

MPI_Send
Performs a blocking send.
Synopsis: int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
Input parameters:
  buf      - initial address of the send buffer (choice)
  count    - number of elements in the send buffer (nonnegative integer)
  datatype - datatype of each send buffer element (handle)
  dest     - rank of destination (integer)
  tag      - message tag (integer)
  comm     - communicator (handle)

MPI_Recv
Blocking receive for a message.
Synopsis: int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
Output parameters:
  buf      - initial address of the receive buffer (choice)
  status   - status object (Status)
Input parameters:
  count    - maximum number of elements in the receive buffer (integer)
  datatype - datatype of each receive buffer element (handle)
  source   - rank of source (integer)
  tag      - message tag (integer)
  comm     - communicator (handle)

Now for Examples
The right way:
MPI_Comm_rank(comm, &my_rank);
if (my_rank == 0) {
    MPI_Send(sendbuf, count, MPI_INT, 1, tag, comm);
    MPI_Recv(recvbuf, count, MPI_INT, 1, tag, comm, &status);
}
else if (my_rank == 1) {
    MPI_Recv(recvbuf, count, MPI_INT, 0, tag, comm, &status);
    MPI_Send(sendbuf, count, MPI_INT, 0, tag, comm);
}

The wrong way:
MPI_Comm_rank(comm, &my_rank);
if (my_rank == 0) {
    MPI_Recv(recvbuf, count, MPI_INT, 1, tag, comm, &status);
    MPI_Send(sendbuf, count, MPI_INT, 1, tag, comm);
}
else if (my_rank == 1) {
    MPI_Recv(recvbuf, count, MPI_INT, 0, tag, comm, &status);
    MPI_Send(sendbuf, count, MPI_INT, 0, tag, comm);
}

Huh?
Why didn't the wrong way work? Deadlock: both processes are waiting for a message to arrive.

Can we work around that?
Yes! Non-blocking communication is one way.

MPI_Isend and MPI_Irecv
- You can look online for the full syntax, and it is really long (the prototypes are sketched below).
- The simple logic is:
  - start a send/recv
  - do some work
  - only check (wait) when you need the data
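For reference, the prototypes are sketched here as we recall them from the MPI standard (the const on the send buffer appeared in MPI-3; check your MPI documentation for the authoritative form):

int MPI_Isend(const void *buf, int count, MPI_Datatype datatype,
              int dest, int tag, MPI_Comm comm, MPI_Request *request);
int MPI_Irecv(void *buf, int count, MPI_Datatype datatype,
              int source, int tag, MPI_Comm comm, MPI_Request *request);
/* MPI_Wait blocks until the given request has completed. */
int MPI_Wait(MPI_Request *request, MPI_Status *status);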

A Visual Example
Painter A asks: "Should I use red paint?" and keeps going; Painter B is working.

A Visual Example
Painter A keeps working with the green while waiting; Painter B is working.

A Visual Example
Painter B answers: "Yes, use the red paint." (Ack) Painter A is still working with the green.

A Visual Example
Painter A is still working with the green; Painter B is back to work.

A Visual Example
Painter A checks the reply: "Oh, cool. I can use red." Painter B is working.

A Visual Example
Painter A is now working with the red; Painter B is working.

A Code Example: The Wrong Way Done Right!
MPI_Comm_rank(comm, &my_rank);
if (my_rank == 0) {
    MPI_Irecv(recvbuf, count, MPI_INT, 1, tag, comm, &recv_request);
    MPI_Isend(sendbuf, count, MPI_INT, 1, tag, comm, &send_request);
    MPI_Wait(&recv_request, &status);
    MPI_Wait(&send_request, &status);
}
else if (my_rank == 1) {
    MPI_Irecv(recvbuf, count, MPI_INT, 0, tag, comm, &recv_request);
    MPI_Isend(sendbuf, count, MPI_INT, 0, tag, comm, &send_request);
    MPI_Wait(&recv_request, &status);
    MPI_Wait(&send_request, &status);
}

The Last Must-Have Tool
- What if we need to ensure one phase is complete before starting the next? (Finish washing your hands before you leave the restroom.)
- How do we guarantee that in MPI? A series of blocking sends and receives? Something else?

Or just use a Barrier
- A barrier halts execution until all processes have signaled that they have reached the barrier.
- There are many ways to implement a barrier; we may discuss these during the course.
- Only use a barrier if you need to: it hurts performance due to idle processes.

Barrier!
MPI_Barrier
Blocks until all processes in the communicator have reached this routine.
Synopsis: int MPI_Barrier(MPI_Comm comm)
Input parameter:
  comm - communicator (handle)

Code Example
Let's see how to use a barrier:
MPI_Comm_rank(comm, &my_rank);
if (my_rank == 0) {
    // p0 does something in stage 1
    MPI_Barrier(comm);
    // p0 does something in stage 2
    MPI_Barrier(comm);
}
else if (my_rank == 1) {
    // p1 does something in stage 1
    MPI_Barrier(comm);
    // p1 does something in stage 2
    MPI_Barrier(comm);
}

Advanced MPI: Other Types of Send/Recv
- Buffered send: copies the data to another buffer.
- MPI_Ssend: won't return until the matching receive has started.
- MPI_Rsend: may only be used if the matching receive is already posted.
- MPI_Sendrecv: combines a send and a receive into a single call (see the sketch below).
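As a small illustration of MPI_Sendrecv (our sketch, reusing the variables from the earlier send/receive example), the deadlocking exchange can be written safely with one call per rank:

/* Both ranks issue one MPI_Sendrecv; the library pairs the internal send and
   receive, so neither rank can block the other. */
int partner = (my_rank == 0) ? 1 : 0;
MPI_Sendrecv(sendbuf, count, MPI_INT, partner, tag,
             recvbuf, count, MPI_INT, partner, tag,
             comm, &status);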

Collective Operations
- An alternative to point-to-point operations.
- Involve communication and synchronization between many processes.
- The two most common are:
  - MPI_Bcast(..., root, ...): all processes call the same function, and all processes receive data from process root.
  - MPI_Reduce(..., root, ...): a reduction operation is applied to data from each process, and the result is given to the root process.
(A usage sketch follows below.)
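A small usage sketch (ours) showing both calls:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value = 0, sum = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) value = 42;
    /* Everyone calls MPI_Bcast; afterwards all ranks hold root's value. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Everyone contributes its rank; the sum arrives only at root (rank 0). */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("broadcast value %d, sum of ranks %d\n", value, sum);

    MPI_Finalize();
    return 0;
}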

Collective Example: An Intuitive One
- A TA normally wants to inform you that there is a huge project, so the TA "broadcasts" the information in a mass email.
- The TA will also collect the quizzes after the midterm exam: a reduction is performed where all of you put your quizzes into the basket of the root process (the TA).

Dot Product
#include <mpi.h>
#include <stdio.h>
#define N 1024   /* problem size (chosen here for completeness) */

int main(int argc, char *argv[])
{
    int i, my_rank, p, loc_n, bn, en;
    float a[N], b[N];
    float loc_dot = 0.0f;
    float dot = 0.0f;

    // Step 1: initialize vectors a and b
    for (i = 0; i < N; i++) {
        a[i] = i;
        b[i] = i + 1;
    }

    // Step 2: initialize MPI
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    // Step 3: each processor computes a local dot product
    loc_n = N / p;
    bn = my_rank * loc_n;
    en = bn + loc_n;
    for (i = bn; i < en; i++)
        loc_dot += a[i] * b[i];

    // Step 4: collect the result at process 0
    MPI_Reduce(&loc_dot, &dot, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (my_rank == 0)
        printf("dot product = %f\n", dot);

    // MPI is terminated
    MPI_Finalize();
    return 0;
}

Thoughts on Advanced MPI
- Be very careful: using collectives may kill performance, if only because they are blocking.
- There may be special cases where the specialized sends and receives are useful.
- But unless you are in HPC, use whatever is most intuitive.

Can I use MPI with ...?
- Fortran? Yes.
- C++? Yes.
- Python? Yes, and it is actually a very intuitive interface that I really like.
- Matlab? Not easily.
- Java? Yes, though not easily.

References
- The CSP paper: http://portal.acm.org/citation.cfm?id=359576.359585
- A reference for MPI: ww3/
