


for (t = 0; t < STEP; t++) {
L1:   for (i = 0; i < M; i++) {
        A[i] = update_1(B[C[i]]);
        B[C[i]] = update_2(i);
      }
L2:   for (k = 0; k < M; k++)
        C[D[k]] = update_3(k);
}

(a) Original program

t = 0
L1_0:  for (i = 0; i < M; i++) {
         A[i] = update_1(B[C[i]]);
         B[C[i]] = update_2(i);
       }
L2_0:  for (k = 0; k < M; k++)
         C[D[k]] = update_3(k);

t = 1
L1_1:  for (i = 0; i < M; i++) {
         A[i] = update_1(B[C[i]]);
         B[C[i]] = update_2(i);
       }
L2_1:  for (k = 0; k < M; k++)
         C[D[k]] = update_3(k);

...

t = STEP
L1_STEP:  for (i = 0; i < M; i++) {
            A[i] = update_1(B[C[i]]);
            B[C[i]] = update_2(i);
          }
L2_STEP:  for (k = 0; k < M; k++)
            C[D[k]] = update_3(k);

(b) After unrolling the outermost loop

Figure 4.1: Example program demonstrating the limitation of the DOMORE transformation

Unlike barriers, which pessimistically synchronize to enforce dependences, speculative techniques allow threads to execute past barriers without stalling. Speculation allows programs to optimistically execute potentially dependent instructions and later check for misspeculation. If misspeculation occurs, the program recovers using checkpointed non-speculative state. Speculative barriers improve performance by synchronizing only on misspeculation.

An evaluation over eight benchmark applications on a 24-core machine shows that SPECCROSS achieves a geomean speedup of 4.6× over the best sequential execution. This compares favorably to the 1.3× speedup obtained by parallel execution without any cross-invocation parallelization.

In the following sections, we first motivate the software-only speculative barrier by comparing it with the alternatives, including non-speculative barriers and hardware-based speculative barriers. We then present the design and implementation of SPECCROSS, after which we describe how SPECCROSS is applied for automatic parallelization.
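Before turning to those details, the following is a minimal sketch, not SPECCROSS itself, of the kind of runtime check a speculative barrier relies on: each loop invocation (epoch) records which elements of a shared array it reads and writes, and overlapping access signatures indicate misspeculation, forcing a rollback to checkpointed state. The type and function names (signature_t, note_read, note_write, epochs_conflict) and the specific indices are hypothetical and for illustration only.

#include <stdbool.h>
#include <stdio.h>

#define N 16    /* size of the shared array being tracked */

/* Per-epoch access signature: which elements were read and written. */
typedef struct {
    bool read[N];
    bool write[N];
} signature_t;

static void note_read(signature_t *s, int idx)  { s->read[idx] = true; }
static void note_write(signature_t *s, int idx) { s->write[idx] = true; }

/* Two epochs conflict if the later one touches anything the earlier one
   wrote, or writes anything the earlier one read. */
static bool epochs_conflict(const signature_t *earlier, const signature_t *later)
{
    for (int i = 0; i < N; i++) {
        if (earlier->write[i] && (later->read[i] || later->write[i]))
            return true;
        if (earlier->read[i] && later->write[i])
            return true;
    }
    return false;
}

int main(void)
{
    signature_t epoch1 = {{0}}, epoch2 = {{0}};

    /* Epoch 1 (say, one invocation of L1) writes a few irregular locations. */
    note_write(&epoch1, 3);
    note_write(&epoch1, 7);

    /* Epoch 2 (say, the following invocation of L2) reads and writes others;
       its write to element 7 overlaps epoch 1's write. */
    note_read(&epoch2, 5);
    note_write(&epoch2, 7);

    if (epochs_conflict(&epoch1, &epoch2))
        printf("misspeculation: restore checkpointed state and re-execute\n");
    else
        printf("no conflict: the speculative barrier never had to stall\n");
    return 0;
}

In an actual speculative runtime the signatures would be collected concurrently and the checkpointing, conflict checking, and re-execution would be coordinated by a separate checker; the sketch shows only the conflict test itself.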

4.1 Motivation and Overview

4.1.1 Limitations of analysis-based parallelization

Figure 4.2(a) shows a code example before parallelization. In this example, loop L1 updates the elements of array A while loop L2 reads the elements of array A and uses their values to update array B. The whole process is repeated STEP times. Both L1 and L2 can be individually parallelized using DOALL [1]. However, dependences between L1 and L2 prevent the outer loop from being parallelized. Ideally, programmers should only synchronize iterations that depend on each other, without stalling the execution of independent iterations. If static analysis [50, 72, 78] could prove that each thread accesses a separate section of arrays A and B, no synchronization would be necessary between two adjacent loop invocations (Figure 4.2(b)). However, since arrays A and B are accessed in an irregular manner (through index arrays C and D), static analysis cannot determine the dependence pattern between L1 and L2. As a result, this naïve parallelization may lead to incorrect runtime behavior.

Alternatively, if static analysis could determine a dependence pattern between iterations from two invocations, e.g., that iteration 1 of L2 always depends on iteration 2 of L1, then fine-grained synchronization could be used to synchronize only those iterations. But this requires accurate analysis of the dependence pattern, which is, in turn, limited by the conservative nature of static analysis. Instead, barrier synchronization is used to globally synchronize all threads (Figure 4.2(c)). Barrier synchronization conservatively assumes dependences between every pair of iterations from two different loop invocations. All threads are forced to stall at barriers after each parallel invocation, which greatly diminishes effective parallelism. Figure 4.3 shows the overhead introduced by barrier synchronization on eight programs parallelized with 8 and 24 threads. Barrier overhead refers to the total time threads spend waiting at barriers.
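As a point of reference for that overhead, the following is a minimal sketch, assuming a POSIX threads environment, of a barrier-synchronized parallelization in the style of Figure 4.2(c), applied to the code of Figure 4.1. The stand-in update functions, the identity index arrays, and the even partitioning of iterations among threads are illustrative assumptions, not part of the original program.

#include <pthread.h>
#include <stdio.h>

#define STEP 4
#define M 1024
#define NTHREADS 4

static int A[M], B[M], C[M], D[M];
static pthread_barrier_t bar;

/* Stand-ins for update_1/update_2/update_3 of Figure 4.1. */
static int update_1(int x) { return x + 1; }
static int update_2(int i) { return i * 2; }
static int update_3(int k) { return k % M; }

static void *worker(void *arg)
{
    long tid = (long)arg;
    long lo = tid * (M / NTHREADS), hi = lo + (M / NTHREADS);

    for (int t = 0; t < STEP; t++) {
        /* This thread's chunk of L1. */
        for (long i = lo; i < hi; i++) {
            A[i] = update_1(B[C[i]]);
            B[C[i]] = update_2(i);
        }
        /* Every thread stalls here before any thread may start L2 ... */
        pthread_barrier_wait(&bar);

        /* This thread's chunk of L2. */
        for (long k = lo; k < hi; k++)
            C[D[k]] = update_3(k);
        /* ... and again before the next invocation of L1. */
        pthread_barrier_wait(&bar);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];

    /* Identity index arrays keep this sketch race-free within each invocation. */
    for (int i = 0; i < M; i++) { C[i] = i; D[i] = i; }

    pthread_barrier_init(&bar, NULL, NTHREADS);
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);
    pthread_barrier_destroy(&bar);

    printf("finished %d invocations of L1 and L2\n", STEP);
    return 0;
}

The two pthread_barrier_wait calls mark the points at which every thread waits for the slowest one, twice per outer iteration, whether or not the invocations actually conflict; that waiting time is the barrier overhead quantified in Figure 4.3.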

