automatically exploiting cross-invocation parallelism using runtime ...
the graph stands for an iteration in a certain loop invocation (e.g., block 1.5 is iteration 5 in the first invocation of loop L1). Typically, threads do not reach barriers at the same time for a variety of reasons. For instance, each thread may be assigned a different number of iterations, and the execution time of each iteration may vary. All threads are forced to stall at barriers after each parallel invocation, losing potential parallelism. Figure 1.4(b) shows a parallel execution plan after naïvely removing barriers. Without barriers, iterations from before and after a barrier may overlap, resulting in better performance.

A few automatic parallelization techniques exploit cross-invocation parallelism [22, 50, 72, 76]. Cross-invocation parallelization requires techniques for respecting cross-invocation dependences without resorting to coarse-grained barrier synchronization. Some techniques [22, 76] respect dependences by combining several small loops into a single larger loop. This approach side-steps the problem of exploiting cross-invocation parallelism by converting it into cross-iteration parallelism. Other approaches [50, 72] carefully partition the iteration space in each loop invocation so that cross-invocation dependences are never split between threads. However, both techniques rely on static analyses. Consequently, they cannot adapt to the dependence patterns manifested by particular inputs at runtime. Many statically detected dependences may only manifest under certain input conditions. For many programs, these dependences rarely manifest given the most common program inputs. By adapting to the dependence patterns of specific inputs at runtime, programs can exploit additional cross-invocation parallelism to achieve greater scalability.
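The loop-combining approach mentioned above can be illustrated on the stencil example of Figure 1.3. In one timestep, iteration j of L2 needs only A[j-1] and A[j], both of which are available once L1 has reached iteration j, so the two loops can be fused and the barrier between them disappears. The sketch below is illustrative only: the functions f and g are hypothetical stand-ins for do_work(), and L2's upper bound is simplified to M-1 so the example is self-contained.

```c
#include <assert.h>

#define M 8

/* Hypothetical stand-ins for do_work(). */
static double f(double x, double y) { return 0.5 * (x + y); }
static double g(double x, double y) { return x - y; }

/* Two separate loop invocations, as in the sequential program. */
static void unfused(double *A, double *B) {
    for (int i = 0; i < M; i++)          /* L1 */
        A[i] = f(B[i], B[i + 1]);
    for (int j = 1; j < M; j++)          /* L2 (simplified bound) */
        B[j] = g(A[j - 1], A[j]);
}

/* Fused loop: by the time iteration i runs, A[i-1] and A[i] are
 * final, and B[i] is written only after A[i] has read it, so the
 * result matches the unfused version without any barrier. */
static void fused(double *A, double *B) {
    for (int i = 0; i < M; i++) {
        A[i] = f(B[i], B[i + 1]);
        if (i >= 1)
            B[i] = g(A[i - 1], A[i]);
    }
}
```

Once fused, the cross-invocation dependence between L1 and L2 becomes an ordinary cross-iteration dependence that existing intra-invocation parallelization techniques can handle.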
main () {
  f();
}

f() {
  for (t = 0; t < TIMESTEP; t++) {
L1: for (i = 0; i < M; i++) {
      A[i] = do_work(B[i], B[i+1]);
    }
L2: for (j = 1; j < M+1; j++) {
      B[j] = do_work(A[j-1], A[j]);
    }
  }
}

(a) Sequential Program

main () {
  for (i = 0; i < NUM_THREADS; i++)
    create_thread(par_f, i);
}

par_f(threadID) {
  for (t = 0; t < TIMESTEP; t++) {
L1: for (i = threadID; i < M; i = i + NUM_THREADS) {
      A[i] = do_work(B[i], B[i+1]);
    }
    pthread_barrier_wait(&barrier);
L2: for (j = threadID + 1; j < M+1; j = j + NUM_THREADS) {
      B[j] = do_work(A[j-1], A[j]);
    }
    pthread_barrier_wait(&barrier);
  }
}

(b) Parallelized Program

Figure 1.3: Example of parallelizing a program with barriers

[Figure 1.4 shows four worker threads executing blocks labeled x.y over time: (a) Parallel Execution with Barriers, where barriers separate each invocation of L1 and L2; (b) Naïve Parallel Execution without Barriers, where iterations of adjacent invocations overlap.]

Figure 1.4: Comparison between executions with and without barriers. A block with label x.y represents the y-th iteration in the x-th loop invocation.