for (t = 0; t < STEP; t++) {
L1: for (i = 0; i < M; i++) {
      A[i] = update_1(B[C[i]]);
      B[C[i]] = update_2(i);
    }
L2: for (k = 0; k < M; k++)
      C[D[k]] = update_3(k);
}

(a) Original program

t = 0
L1_0: for (i = 0; i < M; i++) {
        A[i] = update_1(B[C[i]]);
        B[C[i]] = update_2(i);
      }
L2_0: for (k = 0; k < M; k++)
        C[D[k]] = update_3(k);
t = 1
L1_1: for (i = 0; i < M; i++) {
        A[i] = update_1(B[C[i]]);
        B[C[i]] = update_2(i);
      }
L2_1: for (k = 0; k < M; k++)
        C[D[k]] = update_3(k);
...
t = STEP
L1_STEP: for (i = 0; i < M; i++) {
           A[i] = update_1(B[C[i]]);
           B[C[i]] = update_2(i);
         }
L2_STEP: for (k = 0; k < M; k++)
           C[D[k]] = update_3(k);

(b) After unrolling the outermost loop

Figure 4.1: Example program demonstrating the limitation of the DOMORE transformation

Unlike barriers, which pessimistically synchronize to enforce dependences, speculative techniques allow threads to execute past barriers without stalling. Speculation allows programs to optimistically execute potentially dependent instructions and later check for misspeculation. If misspeculation occurs, the program recovers using checkpointed non-speculative state. Speculative barriers improve performance by synchronizing only on misspeculation.
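To make this execution pattern concrete, the minimal sketch below models the checkpoint, speculate, check, and recover cycle in plain C. It is only an illustration of the pattern just described, not SPECCROSS's interface: the helpers checkpoint, rollback, and misspeculation_detected are invented stand-ins, the two loop invocations run sequentially here rather than in parallel worker threads, and a real runtime would detect conflicts by comparing the memory accesses of concurrently executing epochs.

#include <stdio.h>
#include <string.h>

enum { N = 64, STEPS = 8 };

static int state[N];   /* program state updated by the two loops          */
static int saved[N];   /* checkpointed, non-speculative copy of the state */

static void checkpoint(void) { memcpy(saved, state, sizeof state); }
static void rollback(void)   { memcpy(state, saved, sizeof state); }

/* Stand-ins for two loop invocations (cf. L1 and L2 in Figure 4.1). */
static void invocation_L1(int t) { for (int i = 0; i < N; i++) state[i] += t; }
static void invocation_L2(int t) { for (int i = 0; i < N; i++) state[i] += state[(i + t) % N]; }

/* Stand-in for the runtime's conflict check; always "no conflict" here. */
static int misspeculation_detected(void) { return 0; }

int main(void) {
    for (int t = 0; t < STEPS; t++) {
        checkpoint();                       /* save non-speculative state    */
        invocation_L1(t);
        /* A pessimistic barrier would force every thread to wait here.
         * A speculative barrier lets execution flow into the next
         * invocation and checks for violated dependences afterwards.        */
        invocation_L2(t);
        if (misspeculation_detected()) {    /* synchronize only on conflict  */
            rollback();                     /* restore checkpointed state    */
            invocation_L1(t);               /* re-execute non-speculatively  */
            invocation_L2(t);
        }
    }
    printf("state[0] = %d\n", state[0]);
    return 0;
}

In the common case where no conflict occurs, the only cost is taking the checkpoint; the blocking wait of a conventional barrier is replaced by a check that succeeds without stalling.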
An evaluation over eight benchmark applications on a 24-core machine shows that SPECCROSS achieves a geomean speedup of 4.6× over the best sequential execution. This compares favorably to a 1.3× speedup obtained by parallel execution without any cross-invocation parallelization.

In the following sections, we first motivate the software-only speculative barrier by comparing it with the alternatives, including non-speculative barriers and hardware-based speculative barriers. We then present the design and implementation of SPECCROSS, after which we describe how SPECCROSS is applied for automatic parallelization.

4.1 Motivation and Overview

4.1.1 Limitations of analysis-based parallelization

Figure 4.2(a) shows a code example before parallelization. In this example, loop L1 updates elements of array A, while loop L2 reads those elements from array A and uses their values to update array B. The whole process is repeated STEP times. Both L1 and L2 can be individually parallelized using DOALL [1]. However, dependences between L1 and L2 prevent the outer loop from being parallelized. Ideally, programmers should synchronize only the iterations that depend on each other, without stalling the execution of independent iterations. If static analysis [50, 72, 78] could prove that each thread accesses a separate section of arrays A and B, no synchronization would be necessary between two adjacent loop invocations (Figure 4.2(b)). However, since arrays A and B are accessed in an irregular manner (through index arrays C and D), static analysis cannot determine the dependence pattern between L1 and L2. As a result, this naïve parallelization may lead to incorrect runtime behavior.

Alternatively, if static analysis could determine a dependence pattern between iterations from two invocations, e.g., that iteration 1 from L2 always depends on iteration 2 from L1, then fine-grained synchronization could be used to synchronize only those iterations. But this requires accurate analysis of the dependence pattern, which is, in turn, limited by the conservative nature of static analysis. Instead, barrier synchronization is used to globally synchronize all threads (Figure 4.2(c)). Barrier synchronization conservatively assumes dependences between every pair of iterations from two different loop invocations. All threads are forced to stall at barriers after each parallel invocation, which greatly diminishes the effective parallelism. Figure 4.3 shows the overhead introduced by barrier synchronization on eight programs parallelized with 8 and 24 threads. Barrier overhead refers to the total time threads spend stalling at barriers.
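As a point of reference for this overhead, the sketch below shows roughly what the barrier-synchronized parallelization of Figure 4.2(c) looks like with POSIX threads. Figure 4.2 itself is not reproduced here, so the loop bodies, the index arrays C and D, and the helpers update_1 and update_2 are assumed reconstructions based on the description above; the point is only that every thread must wait at pthread_barrier_wait after each loop invocation, whether or not a cross-invocation dependence actually exists.

#include <pthread.h>
#include <stdio.h>

#define M        1024
#define STEP     16
#define NTHREADS 4

static int A[M], B[M], C[M], D[M];          /* C, D are the index arrays    */
static pthread_barrier_t bar;

static int update_1(int i) { return i + 1; }    /* placeholder loop bodies  */
static int update_2(int a) { return 2 * a; }

static void *worker(void *arg) {
    long tid   = (long)arg;
    long chunk = M / NTHREADS;
    long lo    = tid * chunk, hi = lo + chunk;

    for (int t = 0; t < STEP; t++) {
        for (long i = lo; i < hi; i++)          /* L1: writes A through C   */
            A[C[i]] = update_1(i + t);
        pthread_barrier_wait(&bar);             /* every thread stalls, even */
                                                /* if no dependence exists   */
        for (long k = lo; k < hi; k++)          /* L2: reads A through D,    */
            B[k] += update_2(A[D[k]]);          /* updates B                 */
        pthread_barrier_wait(&bar);             /* stall again before the    */
    }                                           /* next outer iteration      */
    return NULL;
}

int main(void) {
    pthread_t th[NTHREADS];
    for (int i = 0; i < M; i++) { C[i] = i; D[i] = (i * 7) % M; }  /* irregular-looking indices */
    pthread_barrier_init(&bar, NULL, NTHREADS);
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&th[i], NULL, worker, (void *)i);
    for (long i = 0; i < NTHREADS; i++)
        pthread_join(th[i], NULL);
    pthread_barrier_destroy(&bar);
    printf("B[0] = %d\n", B[0]);
    return 0;
}

Because the reads in L2 go through an index array, no thread can know in advance whether it will touch elements written by another thread in the preceding invocation of L1; the conservative barrier pays the full synchronization cost on every outer iteration to cover that possibility.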